"One-Size-Fits-All"? Examining Expectations around What Constitute"Fair"or"Good"NLG System Behaviors
Li Lucy
Su Lin Blodgett
Milad Shokouhi
Hanna Wallach
Fairness-related assumptions about what constitute appropriate NLG system behaviors range from invariance, where systems are expected to behave identically for social groups, to adaptation, where behaviors should instead vary across them. To illuminate tensions around invariance and adaptation, we conduct five case studies, in which we perturb different types of identity-related language features (names, roles, locations, dialect, and style) in NLG system inputs. Through these case studies, we examine people's expectations of system behaviors, and surface potential caveats of these contrasting yet commonly held assumptions. We find that motivations for adaptation include social norms, cultural differences, feature-specific information, and accommodation; in contrast, motivations for invariance include perspectives that favor prescriptivism, view adaptation as unnecessary or too difficult for NLG systems to do appropriately, and are wary of false assumptions. Our findings highlight open challenges around what constitute "fair" or "good" NLG system behaviors.
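As a minimal sketch of the perturbation setup this abstract describes (not the authors' released code), one might hold an NLG prompt fixed and vary only an identity-related feature such as a name; the `generate` function, template, and name list below are all hypothetical placeholders.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for an NLG system; replace with a real model call."""
    return f"[model output for: {prompt}]"

TEMPLATE = "Write a short email reply to {name}, who asked to reschedule a meeting."
NAMES = ["Emily", "DeShawn", "Priya", "Mohammed"]  # illustrative, not the study's list

# Hold everything fixed except the identity-related feature (here, a name),
# then compare outputs: invariance expects near-identical replies, while
# adaptation expects systematic, justified variation across perturbed inputs.
outputs = {name: generate(TEMPLATE.format(name=name)) for name in NAMES}
for name, text in outputs.items():
    print(name, "->", text)
```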
PCR191 Patient-Centric Assessment of Treatment Experience in Breast Cancer: Development and Validation of a Patient Questionnaire
K. Gurjar
B. Rattanavong
L. Bennetts
J. Sahota
M. Ouerghi
C. Ammendolea
J. Asselah
S. Bartlett
C. Brezden-Masley
J. Croke
T. Hijal
J. Papadakos
L. Watson
D. Soliman
Pioneering women in nuclear and radiation sciences
Mirta Dumancic
A responsible framework for applying artificial intelligence on medical images and signals at the point-of-care: the PACS-AI platform.
Pascal Thériault-Lauzier
Denis Cobin
Olivier Tastet
Élodie Labrecque Langlais
B. Taji
Guson Kang
A. Chong
Derek So
An Tang
J. W. Gichoya
Pierre-Luc Deziel
Samuel Kadoury
Robert Avram
Revisiting the 2023 wildfire season in Canada
Flavie Pelletier
Michael A. Wulder
Joanne C. White
Txomin Hermosilla
State Soup: In-Context Skill Learning, Retrieval and Mixing
Maciej Pi'oro
Maciej Wolczyk
Johannes Von Oswald
João Sacramento
A new breed of gated-linear recurrent neural networks has reached state-of-the-art performance on a range of sequence modeling problems. Such models naturally handle long sequences efficiently, as the cost of processing a new input is independent of sequence length. Here, we explore another advantage of these stateful sequence models, inspired by the success of model merging through parameter interpolation. Building on parallels between fine-tuning and in-context learning, we investigate whether we can treat internal states as task vectors that can be stored, retrieved, and then linearly combined, exploiting the linearity of recurrence. We study this form of fast model merging on Mamba-2.8b, a pretrained recurrent model, and present preliminary evidence that simple linear state interpolation methods suffice to improve next-token perplexity as well as downstream in-context learning task performance.
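A hedged sketch of the state interpolation idea, assuming states have already been extracted from a recurrent model after processing in-context demonstrations; the shapes and extraction step are placeholders, not Mamba-2.8b's actual API.

```python
import numpy as np

# Sketch of "state soup": treat the hidden states a recurrent model reaches
# after processing two sets of task demonstrations as task vectors, and mix
# them linearly before continuing generation. Shapes are illustrative;
# extracting real states from Mamba-2.8b requires the model's own API.

rng = np.random.default_rng(0)
d_state = 16
state_task_a = rng.normal(size=d_state)  # state after in-context demos of task A
state_task_b = rng.normal(size=d_state)  # state after in-context demos of task B

def mix_states(s_a, s_b, alpha):
    """Linear interpolation of stored states, exploiting linearity of the recurrence."""
    return alpha * s_a + (1.0 - alpha) * s_b

mixed = mix_states(state_task_a, state_task_b, alpha=0.5)
# `mixed` would then be installed as the model's initial state before decoding,
# aiming to combine the skills carried by both stored states.
print(mixed.shape)
```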
Transformers meet Neural Algorithmic Reasoners
Wilfried Bounsi
Borja Ibarz
Andrew Joseph Dudzik
Jessica B. Hamrick
Larisa Markeeva
Alex Vitvitskyi
Petar Veličković
Transformers have revolutionized machine learning with their simple yet effective architecture. Pre-training Transformers on massive text datasets from the Internet has led to unmatched generalization for natural language understanding (NLU) tasks. However, such language models remain fragile when tasked with algorithmic forms of reasoning, where computations must be precise and robust. To address this limitation, we propose a novel approach that combines the Transformer's language understanding with the robustness of graph neural network (GNN)-based neural algorithmic reasoners (NARs). Such NARs proved effective as generic solvers for algorithmic tasks, when specified in graph form. To make their embeddings accessible to a Transformer, we propose a hybrid architecture with a two-phase training procedure, allowing the tokens in the language model to cross-attend to the node embeddings from the NAR. We evaluate our resulting TransNAR model on CLRS-Text, the text-based version of the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only models for algorithmic reasoning, both in and out of distribution.
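A hedged PyTorch sketch of the hybrid idea described above, in which token embeddings cross-attend to NAR node embeddings; the dimensions, module names, and wiring are illustrative assumptions, not the paper's released architecture.

```python
import torch
import torch.nn as nn

class TokenToNodeCrossAttention(nn.Module):
    """Lets language-model tokens attend to node embeddings from a GNN-based NAR."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_emb: torch.Tensor, node_emb: torch.Tensor) -> torch.Tensor:
        # Queries come from the language model's tokens; keys and values come
        # from the NAR's node embeddings, so text can read algorithmic state.
        attended, _ = self.cross_attn(query=token_emb, key=node_emb, value=node_emb)
        return self.norm(token_emb + attended)  # residual connection

tokens = torch.randn(2, 32, 128)  # (batch, sequence length, d_model)
nodes = torch.randn(2, 10, 128)   # (batch, graph nodes, d_model)
out = TokenToNodeCrossAttention(128)(tokens, nodes)
print(out.shape)  # torch.Size([2, 32, 128])
```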
Transformers need glasses! Information over-squashing in language tasks
Federico Barbero
Andrea Banino
Steven Kapturowski
Dharshan Kumaran
João Guilherme Madeira Araújo
Alex Vitvitskyi
Petar Veličković
We study how information propagates in decoder-only Transformers, which are the architectural backbone of most existing frontier large language models (LLMs). We rely on a theoretical signal propagation analysis -- specifically, we analyse the representations of the last token in the final layer of the Transformer, as this is the representation used for next-token prediction. Our analysis reveals a representational collapse phenomenon: we prove that certain distinct sequences of inputs to the Transformer can yield arbitrarily close representations in the final token. This effect is exacerbated by the low-precision floating-point formats frequently used in modern LLMs. As a result, the model is provably unable to respond to these sequences in different ways -- leading to errors in, e.g., tasks involving counting or copying. Further, we show that decoder-only Transformer language models can lose sensitivity to specific tokens in the input, which relates to the well-known phenomenon of over-squashing in graph neural networks. We provide empirical evidence supporting our claims on contemporary LLMs. Our theory also points to simple solutions towards ameliorating these issues.
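A toy numeric illustration (not the paper's construction) of how low-precision arithmetic can collapse representations of distinct inputs: sequentially accumulating evidence in float16, as a fixed-precision model must, makes sequences of n and n+1 identical tokens indistinguishable once the accumulator's value spacing exceeds 1, a counting-style failure.

```python
import numpy as np

def accumulate(n: int) -> np.float16:
    """Sum n ones with a float16 accumulator, mimicking fixed-precision aggregation."""
    acc = np.float16(0.0)
    for _ in range(n):
        acc = np.float16(acc + np.float16(1.0))
    return acc

for n in [100, 2048, 4096]:
    a, b = accumulate(n), accumulate(n + 1)
    print(f"n={n}: repr(n)={a}, repr(n+1)={b}, collapsed={a == b}")
# Past 2048, float16's gap between adjacent representable values is 2, so
# adding one more token cannot change the state: distinct input sequences
# map to identical "representations", and no readout can separate them.
```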
When does Self-Prediction help? Understanding Auxiliary Tasks in Reinforcement Learning
Claas Voelcker
Tyler Kastner
Igor Gilitschenski
Amir-massoud Farahmand
We investigate the impact of auxiliary learning tasks such as observation reconstruction and latent self-prediction on the representation learning problem in reinforcement learning. We also study how they interact with distractions and observation functions in the MDP. We provide a theoretical analysis of the learning dynamics of observation reconstruction, latent self-prediction, and TD learning in the presence of distractions and observation functions under linear model assumptions. With this formalization, we are able to explain why latent self-prediction is a helpful auxiliary task, while observation reconstruction can provide more useful features when used in isolation. Our empirical analysis shows that the insights obtained from our learning dynamics framework predict the behavior of these loss functions beyond the linear model assumption in non-linear neural networks. This reinforces the usefulness of the linear model framework not only for theoretical analysis, but also for its practical benefit in applied problems.
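A minimal PyTorch sketch of the two auxiliary objectives the abstract contrasts, under a simple linear encoder/decoder parameterization; this is an illustrative setup, not the paper's code, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Latent self-prediction regresses the *encoded* next observation, while
# observation reconstruction regresses the raw observation itself.
obs_dim, act_dim, latent_dim = 8, 2, 4
encoder = nn.Linear(obs_dim, latent_dim)                    # phi(s)
latent_model = nn.Linear(latent_dim + act_dim, latent_dim)  # predicts phi(s')
decoder = nn.Linear(latent_dim, obs_dim)                    # reconstructs s

s = torch.randn(32, obs_dim)       # batch of observations
a = torch.randn(32, act_dim)       # batch of actions
s_next = torch.randn(32, obs_dim)  # batch of next observations

z, z_next = encoder(s), encoder(s_next)

# Latent self-prediction: match the (detached) encoding of the next observation.
latent_loss = F.mse_loss(latent_model(torch.cat([z, a], dim=-1)), z_next.detach())

# Observation reconstruction: decode the latent back to the raw observation.
recon_loss = F.mse_loss(decoder(z), s)

print(latent_loss.item(), recon_loss.item())
```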