Making the Write Connections: Linking Writing Support Tools with Writer's Needs
Zixin Zhao
Young-Ho Kim
Gerald Penn
Fanny Chevalier
This work sheds light on whether and how creative writers' needs are met by existing research and commercial writing support tools (WST). We conducted a need-finding study to gain insight into writers' processes during creative writing through a qualitative analysis of responses to an online questionnaire and Reddit discussions on r/Writing. Using a systematic analysis of 115 tools and 67 research papers, we map out the landscape of how digital tools facilitate the writing process. Our triangulation of data reveals that research predominantly focuses on the writing activity itself and overlooks pre-writing activities and the importance of visualization. We distill 10 key takeaways to inform future research on WST and point to opportunities surrounding underexplored areas. Our work offers a holistic and up-to-date account of how tools have transformed the writing process, guiding the design of future tools that address writers' evolving and unmet needs.
Multilingual Language Model Pretraining using Machine-translated Data
Jiayi Wang
Yao Lu
Maurice Weber
Max Ryabinin
Yihong Chen
Raphael Tang
Pontus Stenetorp
Random Forest Autoencoders for Guided Representation Learning
Kevin R. Moon
Jake S. Rhodes
Decades of research have produced robust methods for unsupervised data visualization, yet supervised visualization…
Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More Measurable Objectives
Yan Scholten
Tom Wollschlager
Stephen Casper
Stephan Günnemann
Misaligned research objectives have considerably hindered progress in adversarial robustness research over the past decade. For instance, an extensive focus on optimizing target metrics, while neglecting rigorous standardized evaluation, has led researchers to pursue ad-hoc heuristic defenses that were seemingly effective. Yet, most of these were exposed as flawed by subsequent evaluations, ultimately contributing little measurable progress to the field. In this position paper, we illustrate that current research on the robustness of large language models (LLMs) risks repeating past patterns with potentially worsened real-world implications. To address this, we argue that realigned objectives are necessary for meaningful progress in adversarial alignment. To this end, we build on an established cybersecurity taxonomy to formally define differences between past and emerging threat models that apply to LLMs. Using this framework, we illustrate that progress requires disentangling adversarial alignment into addressable sub-problems and returning to core academic principles, such as measurability, reproducibility, and comparability. Although the field presents significant challenges, the fresh start on adversarial robustness offers the unique opportunity to build on past experience while avoiding previous mistakes.
Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models
Sayed Mohammadreza Tayaranian Hosseini
Seyyed Hasan Mozafari
Brett H. Meyer
James J. Clark
Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks. Previous work studied the effect of pruning the training set of the downstream tasks on the performance of the model on its evaluation set. In this work, we propose an automatic dataset pruning method for the training set of fine-tuning tasks. Our method is based on the model's success rate in correctly classifying each training data point. Unlike previous work, which relies on user feedback to determine subset size, our method automatically extracts training subsets that are adapted for each pair of model and fine-tuning task. Our method provides multiple subsets for use in dataset pruning that navigate the trade-off between subset size and evaluation accuracy. Our largest subset, which we also refer to as the winning ticket subset, is on average
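The core selection criterion described in the abstract, per-example success rate, can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the helper names (`success_rates`, `prune_by_threshold`) and the thresholding rule are assumptions, standing in for whatever exact selection procedure the paper uses.

```python
def success_rates(correct_per_epoch):
    """correct_per_epoch: one list per fine-tuning epoch, each holding a
    0/1 correctness flag per training example. Returns, for each example,
    the fraction of epochs in which the model classified it correctly."""
    n_epochs = len(correct_per_epoch)
    n_examples = len(correct_per_epoch[0])
    return [
        sum(epoch[i] for epoch in correct_per_epoch) / n_epochs
        for i in range(n_examples)
    ]

def prune_by_threshold(examples, rates, threshold):
    """Keep examples whose success rate is at most `threshold`, i.e. the
    harder points. Sweeping the threshold yields nested subsets that
    navigate the trade-off between subset size and evaluation accuracy."""
    return [ex for ex, r in zip(examples, rates) if r <= threshold]

# Toy run: 3 epochs of correctness flags over 4 training examples.
flags = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 1]]
rates = success_rates(flags)
subset = prune_by_threshold(["a", "b", "c", "d"], rates, 0.7)
```

Here examples "a" and "d" are always classified correctly and are pruned, while "b" and "c" are retained as the informative subset.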
BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion Recognition Datasets for 28 Languages
Shamsuddeen Hassan Muhammad
Nedjma OUSIDHOUM
Idris Abdulmumin
Jan Philip Wahle
Terry Lima Ruas
Meriem Beloucif
Christine de Kock
Nirmal Surange
Daniela Teodorescu
Ibrahim Ahmad
Alham Fikri Aji
Felermino Ali
Ilseyar Alimova
Vladimir Araujo
Nikolay Babakov
Naomi Baes
Ana-Maria Bucur
Andiswa Bukula
Guanqun Cao
Rodrigo Tufino Cardenas
Rendi Chevi
Chiamaka Ijeoma Chukwuneke
Alexandra Ciobotaru
Daryna Dementieva
Murja Sani Gadanya
Robert Geislinger
Bela Gipp
Oumaima Hourrane
Oana Ignat
Falalu Lawan
Rooweither Mabuya
Rahmad Mahendra
Vukosi Marivate
Andrew Piper
Alexander Panchenko
Charles Henrique Porto Ferreira
Vitaly Protasov
Samuel Rutunda
Manish Shrivastava
Aura Cristina Udrea
Lilian D. A. Wanzare
Sophie Wu
Florian Valentin Wunderlich
Hanif Muhammad Zhafran
Tianhui Zhang
Yi Zhou
Saif M. Mohammad