Publications

Efficient Adversarial Training in LLMs with Continuous Attacks
Sophie Xhonneux
Stephan Günnemann
Leo Schwinn
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails. In many domains, adversarial training has proven to be one of the most promising methods to reliably improve robustness against such attacks. Yet, in the context of LLMs, current methods for adversarial training are hindered by the high computational costs required to perform discrete adversarial attacks at each training iteration. We address this problem by instead calculating adversarial attacks in the continuous embedding space of the LLM, which is orders of magnitude more efficient. We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses: the first makes the model robust against continuous embedding attacks computed on an adversarial behaviour dataset; the second ensures the usefulness of the final model by fine-tuning on utility data. Moreover, we introduce C-AdvIPO, an adversarial variant of IPO that does not require utility data for adversarially robust alignment. Our empirical evaluation on four models from different families (Gemma, Phi3, Mistral, Zephyr) and at different scales (2B, 3.8B, 7B) shows that both algorithms substantially enhance LLM robustness against discrete attacks (GCG, AutoDAN, PAIR), while maintaining utility. Our results demonstrate that robustness to continuous perturbations can extrapolate to discrete threat models. We thereby present a path toward scalable adversarial training algorithms for robustly aligning LLMs.
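A minimal sketch of the two-loss idea described above: a PGD-style attack computed directly in the continuous embedding space, combined with a utility loss on clean data. The toy embedding model, random data, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Sketch: continuous embedding-space adversarial training with a utility term.
# The toy classifier and random data stand in for an LLM and real datasets.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab, dim, num_classes = 100, 32, 2
embed = nn.Embedding(vocab, dim)
head = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-3)

def embedding_attack(pooled, labels, eps=0.5, alpha=0.2, steps=5):
    """PGD-style attack on the continuous embeddings (no discrete token search)."""
    delta = torch.zeros_like(pooled, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(head(pooled + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

for _ in range(200):
    adv_ids = torch.randint(0, vocab, (16, 10))       # "adversarial behaviour" prompts
    adv_labels = torch.zeros(16, dtype=torch.long)    # e.g. target the safe/refusal class
    util_ids = torch.randint(0, vocab, (16, 10))      # utility data
    util_labels = torch.randint(0, num_classes, (16,))

    pooled = embed(adv_ids).mean(dim=1)
    delta = embedding_attack(pooled.detach(), adv_labels)
    robust_loss = F.cross_entropy(head(pooled + delta), adv_labels)                    # loss 1: robustness
    utility_loss = F.cross_entropy(head(embed(util_ids).mean(dim=1)), util_labels)     # loss 2: utility
    opt.zero_grad()
    (robust_loss + utility_loss).backward()
    opt.step()
```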
Neural computations in prosopagnosia
Simon Faghel-Soubeyrand
Anne-Raphaelle Richoz
Delphine Waeber
Jessica Woodhams
Roberto Caldara
Frédéric Gosselin
We aimed to identify neural computations underlying the loss of face identification ability by modelling the brain activity of brain-lesioned patient PS, a well-documented case of acquired pure prosopagnosia. We collected a large dataset of high-density electrophysiological (EEG) recordings from PS and neurotypicals while they completed a one-back task on a stream of face, object, animal and scene images. We found reduced neural decoding of face identity around the N170 window in PS, and conjointly revealed normal non-face identification in this patient. We used Representational Similarity Analysis (RSA) to correlate human EEG representations with those of deep neural network (DNN) models of vision and caption-level semantics, offering a window into the neural computations at play in patient PS’s deficits. Brain representational dissimilarity matrices (RDMs) were computed for each participant at 4 ms steps using cross-validated classifiers. PS’s brain RDMs showed significant reliability across sessions, indicating meaningful measurements of brain representations with RSA even in the presence of significant lesions. Crucially, computational analyses were able to reveal PS’s representational deficits in high-level visual and semantic brain computations. Such multi-modal, data-driven characterisations of prosopagnosia highlight the complex nature of processes contributing to face recognition in the human brain.
Highlights:
We assess the neural computations in the prosopagnosic patient PS using EEG, RSA, and deep neural networks.
Neural dynamics of brain-lesioned PS are reliably captured using RSA.
Neural decoding shows normal evidence for non-face individuation in PS.
Neural decoding shows abnormal neural evidence for face individuation in PS.
PS shows impaired high-level visual and semantic neural computations.
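A minimal sketch of the RSA step: build a brain RDM per time point and correlate it with a model RDM. The simulated data, correlation-distance metric, and Spearman comparison are illustrative assumptions; the study itself used cross-validated classifiers at 4 ms steps.

```python
# Sketch: time-resolved RSA between (simulated) EEG patterns and a model RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, n_channels, n_times = 24, 128, 50  # stimuli, EEG sensors, time points
eeg = rng.standard_normal((n_items, n_channels, n_times))                       # item-averaged EEG patterns
model_rdm = pdist(rng.standard_normal((n_items, 512)), metric="correlation")    # e.g. a DNN-layer RDM

rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    brain_rdm = pdist(eeg[:, :, t], metric="correlation")       # pairwise item dissimilarity at time t
    rsa_timecourse[t] = spearmanr(brain_rdm, model_rdm)[0]      # brain-model representational similarity

print("peak brain-model correlation:", rsa_timecourse.max())
```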
Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective
Pranshu Malviya
Jerry Huang
Quentin Fournier
The optimal model for a given task is often challenging to determine, requiring training multiple models from scratch, which becomes prohibitive as dataset and model sizes grow. A more efficient alternative is to reuse smaller pre-trained models by expanding them; however, this is not widely adopted because how expansion impacts training dynamics remains poorly understood. While prior works have introduced statistics to measure these effects, those statistics remain flawed. To rectify this, we offer a new approach for understanding and quantifying the impact of expansion through the lens of the loss landscape, which has been shown to contain a manifold of linearly connected minima. Building on this new perspective, we propose a metric to study the impact of expansion by estimating the size of the manifold. Experimental results show a clear relationship between gains in performance and manifold size, enabling the comparison of candidate models and presenting a first step towards expanding models more reliably based on geometric properties of the loss landscape.
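A minimal sketch of the loss-landscape ingredient this perspective builds on: evaluating the loss along the straight line between two trained minima (linear mode connectivity). The toy model and data are assumptions for illustration; the paper's manifold-size metric itself is not reproduced here.

```python
# Sketch: loss along the linear path between two independently trained minima.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).long()

def train_model(seed):
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        F.cross_entropy(model(X), y).backward()
        opt.step()
    return model

m0, m1 = train_model(1), train_model(2)

def loss_at(alpha):
    """Loss of the model whose weights interpolate linearly between m0 and m1."""
    interp = copy.deepcopy(m0)
    with torch.no_grad():
        for p, p0, p1 in zip(interp.parameters(), m0.parameters(), m1.parameters()):
            p.copy_((1 - alpha) * p0 + alpha * p1)
        return F.cross_entropy(interp(X), y).item()

path = [loss_at(a / 10) for a in range(11)]
barrier = max(path) - max(path[0], path[-1])   # loss barrier between the two minima
print("losses along path:", [round(l, 3) for l in path], "barrier:", round(barrier, 3))
```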
Assortment Optimization with Visibility Constraints
Théo Barré
Omar El Housni
Attention as an RNN
Leo Feng
Frederick Tung
Hossein Hajimirsadeghi
Mohamed Osama Ahmed
Greg Mori
Mining Action Rules for Defect Reduction Planning
Khouloud Oueslati
Gabriel Laberge
Maxime Lamothe
Defect reduction planning plays a vital role in enhancing software quality and minimizing software maintenance costs. By training a black box machine learning model and "explaining" its predictions, explainable AI for software engineering aims to identify the code characteristics that impact maintenance risks. However, post-hoc explanations do not always faithfully reflect what the original model computes. In this paper, we introduce CounterACT, a Counterfactual ACTion rule mining approach that can generate defect reduction plans without black-box models. By leveraging action rules, CounterACT provides a course of action that can be considered a counterfactual explanation for the class (e.g., buggy or not buggy) assigned to a piece of code. We compare the effectiveness of CounterACT with the original action rule mining algorithm and six established defect reduction approaches on 9 software projects. Our evaluation is based on (a) overlap scores between proposed code changes and actual developer modifications; (b) improvement scores in future releases; and (c) the precision, recall, and F1-score of the plans. Our results show that, compared to competing approaches, CounterACT's explainable plans achieve higher overlap scores at the release level (median 95%) and commit level (median 85.97%), and they offer a better trade-off between precision and recall (median F1-score 88.12%). Finally, we venture beyond planning and explore leveraging large language models (LLMs) to generate code edits from our generated plans. Our results show that the suggested LLM code edits supported by our plans are actionable and are more likely to pass relevant test cases than vanilla LLM code recommendations.
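A minimal sketch of what an action rule and an overlap score could look like. The metric names, the example rule, and the score definition (share of proposed metric changes that developers actually made) are hypothetical illustrations, not CounterACT's exact formulation.

```python
# Sketch: an action rule as a counterfactual plan for one file, plus an assumed
# overlap score against the developer's actual changes in the next release.
from dataclasses import dataclass

@dataclass
class ActionRule:
    premise: dict    # metric -> (current value, target value)
    effect: tuple    # (current class, target class)

rule = ActionRule(
    premise={"cyclomatic_complexity": ("high", "low"), "fan_out": ("high", "low")},
    effect=("buggy", "clean"),
)

def overlap_score(rule, changed_metrics):
    """Fraction of the plan's proposed changes that appear among developer edits."""
    proposed = set(rule.premise)
    return len(proposed & changed_metrics) / len(proposed) if proposed else 0.0

developer_edit = {"cyclomatic_complexity", "lines_added"}   # metrics changed by developers
print(f"overlap: {overlap_score(rule, developer_edit):.0%}")  # -> 50%
```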
Static graph approximations of dynamic contact networks for epidemic forecasting
Razieh Shirzadkhani
Shenyang Huang
Abby Leung
Investigation of dosimetric characteristics of radiochromic film in response to alpha particles emitted from Americium-241.
Victor D. Diaz‐Martinez
Mélodie Cyr
S. Devic
Nada Tomic
David F. Lewis
BACKGROUND In radiotherapy, it is essential to deliver prescribed doses to tumors while minimizing damage to surrounding healthy tissue. Accurate measurements of absorbed dose are required for this purpose. Gafchromic® external beam therapy (EBT) radiochromic films have been widely used in radiotherapy. While the dosimetric characteristics of the EBT3 model film have been extensively studied for photon and charged particle beams (protons, electrons, and carbon ions), little research has been done on α
BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization
Miloš Nikolić
Ghouthi Boukli Hacene
Ciaran Bannon
Alberto Delmas Lascorz
Matthieu Courbariaux
Omar Mohamed Awad
Isak Edo Vivancos
Vincent Gripon
Andreas Moshovos
Neural networks have demonstrably achieved state-of-the-art accuracy using low-bitlength integer quantization, yielding both execution time and energy benefits on existing hardware designs that support short bitlengths. However, the question of finding the minimum bitlength for a desired accuracy remains open. We introduce a training method for minimizing inference bitlength at any granularity while maintaining accuracy. Namely, we propose a regularizer that penalizes large bitlength representations throughout the architecture and show how it can be modified to minimize other quantifiable criteria, such as number of operations or memory footprint. We demonstrate that our method learns thrifty representations while maintaining accuracy. With ImageNet, the method produces an average per-layer bitlength of 4.13, 3.76 and 4.36 bits on AlexNet, ResNet18 and MobileNet V2 respectively, remaining within 2.0%, 0.5% and 0.5% of the base TOP-1 accuracy.
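A minimal sketch of the core idea: a learnable per-layer bitlength whose sum is penalized alongside the task loss. The uniform quantizer, straight-through rounding, toy task, and trade-off weight are illustrative assumptions, not the paper's exact method.

```python
# Sketch: learning per-layer bitlengths with a regularizer that penalizes
# long representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.bits = nn.Parameter(torch.tensor(8.0))  # learnable bitlength for this layer

    def forward(self, x):
        w = self.linear.weight
        levels = 2.0 ** self.bits.clamp(min=1.0) - 1.0
        scale = w.abs().max().detach() + 1e-8
        scaled = w / scale * levels
        rounded = scaled + (torch.round(scaled) - scaled).detach()  # straight-through rounding
        w_q = rounded / levels * scale                              # dequantized weights
        return F.linear(x, w_q, self.linear.bias)

torch.manual_seed(0)
net = nn.Sequential(QuantLinear(10, 32), nn.ReLU(), QuantLinear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()
lam = 0.01  # strength of the bitlength penalty

for _ in range(200):
    task_loss = F.cross_entropy(net(X), y)
    bit_penalty = sum(m.bits for m in net.modules() if isinstance(m, QuantLinear))
    opt.zero_grad()
    (task_loss + lam * bit_penalty).backward()
    opt.step()

print("learned bitlengths:",
      [round(m.bits.item(), 2) for m in net.modules() if isinstance(m, QuantLinear)])
```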
Coordination among leaf and fine-root traits along a strong natural soil fertility gradient
Xavier Guilbeault-Mayers
Hans Lambers
Implementing a Hierarchical Deep Learning Approach for Simulating Multilevel Auction Data
Marcelin Joanis
Igor Sadoune
Towards Modular LLMs by Building and Reusing a Library of LoRAs
Oleksiy Ostapenko
Zhan Su
Edoardo Ponti
Matheus Pereira
Lucas Caccia
The growing number of parameter-efficient adaptations of a base large language model (LLM) calls for studying whether we can reuse such trained adapters to improve performance for new tasks. We study how to best build a library of adapters given multi-task data and devise techniques for both zero-shot and supervised task generalization through routing in such a library. We benchmark existing approaches to build this library and introduce model-based clustering (MBC), a method that groups tasks based on the similarity of their adapter parameters, indirectly optimizing for transfer across the multi-task dataset. To reuse the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters for new inputs without the need for retraining. We experiment with several LLMs, such as Phi-2 and Mistral, on a wide array of held-out tasks, verifying that MBC-based adapters and Arrow routing lead to superior generalization to new tasks. We take steps towards creating modular, adaptable LLMs that can match or outperform traditional joint training.
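A minimal sketch of the two ingredients named above: grouping tasks by the similarity of their adapter parameters (MBC) and zero-shot routing over the resulting library. The toy LoRA shapes, plain k-means, the sklearn dependency, and the scoring rule used for routing (aligning hidden states with each update's top singular direction, in the spirit of Arrow) are assumptions for illustration, not the paper's implementation.

```python
# Sketch: MBC-style clustering of LoRA adapters and a simple zero-shot router.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
num_tasks, rank, d_model = 20, 4, 64

# One toy LoRA adapter (A, B) per task; flatten each into a single feature vector.
adapters = [(rng.standard_normal((rank, d_model)), rng.standard_normal((d_model, rank)))
            for _ in range(num_tasks)]
features = np.stack([np.concatenate([A.ravel(), B.ravel()]) for A, B in adapters])

# MBC: group tasks whose adapter parameters are similar.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print("task -> cluster:", dict(enumerate(clusters.tolist())))

def routing_scores(hidden, adapters):
    """Score each adapter by how strongly its low-rank update aligns with the input."""
    scores = []
    for A, B in adapters:
        delta_w = B @ A                                      # low-rank weight update
        _, _, vt = np.linalg.svd(delta_w, full_matrices=False)
        scores.append(abs(hidden @ vt[0]))                   # |<hidden, top right-singular vector>|
    return np.array(scores)

hidden_state = rng.standard_normal(d_model)   # a new input's hidden representation
print("selected adapter:", int(routing_scores(hidden_state, adapters).argmax()))
```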