Publications

Understanding In-Context Learning of Linear Models in Transformers Through an Adversarial Lens
Usman Anwar
Johannes Von Oswald
Louis Kirsch
Spencer Frei
In this work, we make two contributions towards understanding in-context learning of linear models by transformers. First, we investigate the adversarial robustness of in-context learning in transformers to hijacking attacks, a type of adversarial attack in which the adversary's goal is to manipulate the prompt to force the transformer to generate a specific output. We show that both linear transformers and transformers with GPT-2 architectures are vulnerable to such hijacking attacks. However, adversarial robustness to these attacks can be significantly improved through adversarial training, done either at the pretraining or finetuning stage, and this robustness can generalize to stronger attack models. Our second main contribution is a comparative analysis of adversarial vulnerabilities across transformer models and other algorithms for learning linear models. This analysis reveals two novel findings. First, adversarial attacks transfer poorly between larger transformer models trained from different seeds despite achieving similar in-distribution performance, suggesting that transformers of the same architecture trained according to the same recipe may implement different in-context learning algorithms for the same task. Second, attacks do not transfer well between classical learning algorithms for linear models (single-step gradient descent and ordinary least squares) and transformers, suggesting that there could be qualitative differences between the in-context learning algorithms transformers implement and these traditional algorithms.
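The following is a minimal illustrative sketch (not the authors' code) of the kind of gradient-based hijacking attack described above. It assumes model is any trained sequence model mapping a prompt of in-context examples plus a query to a prediction; the function, argument names, and hyperparameters are hypothetical.

import torch

def hijack_prompt(model, prompt, target, steps=100, lr=1e-2, eps=0.5):
    # Perturb the in-context examples so the model's prediction is pushed
    # toward an attacker-chosen target value.
    delta = torch.zeros_like(prompt, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = model(prompt + delta)          # prediction on the perturbed prompt
        loss = (pred - target).pow(2).mean()  # distance to the attacker's target output
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # keep the perturbation bounded
    return (prompt + delta).detach()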
Infrared Object Detection with Ultra Small ConvNets: Is ImageNet Pretraining Still Useful?
Srikanth Muralidharan
Heitor Rapela Medeiros
Masih Aminbeidokhti
Eric Granger
Many real-world applications require recognition models that are robust to different operational conditions and modalities, but at the same time run on small embedded devices with limited hardware. While pre-training is known to be very beneficial to the accuracy and robustness of normal-size models, its effect on the small models that can be deployed on embedded and edge devices is not clear. In this work, we investigate the effect of ImageNet pretraining on increasingly small backbone architectures (ultra-small models, with
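As a rough illustration of the comparison being studied (assumptions, not the paper's code), the setup amounts to initializing the same small backbone with and without ImageNet weights before fine-tuning on the infrared detection data; mobilenet_v3_small from torchvision stands in here for the ultra-small ConvNets examined in the paper.

from torchvision.models import MobileNet_V3_Small_Weights, mobilenet_v3_small

# Backbone initialized from ImageNet-pretrained weights.
pretrained_backbone = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.IMAGENET1K_V1)
# Identical architecture initialized randomly.
scratch_backbone = mobilenet_v3_small(weights=None)
# Both backbones would then be fine-tuned on the infrared data so that any
# accuracy or robustness gap can be attributed to the ImageNet initialization.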
A Guide to Misinformation Detection Data and Evaluation
Gabrielle Péloquin-Skulski
James Zhou
Florence Laflamme
Yuxiang Guan
Misinformation is a complex societal issue, and mitigating solutions are difficult to create due to data deficiencies. To address this problem, we have curated the largest collection of (mis)information datasets in the literature, totaling 75. From these, we evaluated the quality of all 36 datasets that consist of statements or claims, as well as the 9 datasets that consist of data in purely paragraph form. We assess these datasets to identify those with solid foundations for empirical work and those with flaws that could result in misleading and non-generalizable results, such as insufficient label quality and spurious correlations. We further provide state-of-the-art baselines on all these datasets, but show that regardless of label quality, categorical labels may no longer give an accurate evaluation of detection model performance. We discuss alternatives to mitigate this problem. Overall, this guide aims to provide a roadmap for obtaining higher-quality data and conducting more effective evaluations, ultimately improving research in misinformation detection. All datasets and other artifacts are available at [anonymized].
Responsible AI Day
Ebrahim Bagheri
Faezeh Ensan
Calvin Hillis
Robin Cohen
Sébastien Gambs
Revisiting Replay and Gradient Alignment for Continual Pre-Training of Large Language Models
Istabrak Abbes
Matthew D Riemer
Tsuguchika Tabaru
Hiroaki Kingetsu
Training large language models (LLMs) typically involves pre-training on massive corpora, only to restart the process entirely when new data becomes available. A more efficient and resource-conserving approach would be continual pre-training, where models are updated with new data rather than retrained from scratch. However, the introduction of new data often causes distribution shifts, leading to performance degradation on previously learned tasks. In this paper, we take a deeper look at two popular proposals for addressing this distribution shift within the continual learning literature: experience replay and gradient alignment. We consider continual pre-training of models within the Llama family of architectures at a large scale across languages, with 100 billion tokens of training data in each language, and find that both replay and gradient alignment lead to more stable learning without forgetting. This conclusion holds both as we vary the model scale and as we vary the number and diversity of tasks. Moreover, we are the first to demonstrate the effectiveness of gradient alignment techniques in the context of LLM pre-training, and we propose an efficient implementation of meta-experience replay (MER) that imbues experience replay with the benefits of gradient alignment at negligible compute and memory overhead. Our scaling analysis across model sizes and replay rates indicates that small rates of replaying old examples are a more valuable use of compute than investing in model size, but that it is more compute-efficient to scale model size than to invest in high rates of replaying old examples.
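A minimal sketch of the experience-replay side of this setup (illustrative assumptions, not the paper's implementation): it presumes a Hugging Face-style model whose forward pass returns an object with a loss attribute, and hypothetical iterators new_batches and old_batches over token batches from the new and previously seen languages.

import random

def continual_pretrain_step(model, optimizer, new_batches, old_batches,
                            replay_rate=0.05):
    # With probability replay_rate, revisit a batch from the old corpus
    # instead of the new one; even small replay rates help limit forgetting.
    batch = next(old_batches) if random.random() < replay_rate else next(new_batches)
    loss = model(**batch).loss      # standard language-modeling loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()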
Temporal Graph Learning Workshop
Daniele Zambon
Andrea Cini
Julia Gastinger
Michael Bronstein