Publications

A Novel Bifurcation Method for Observation Perturbation Attacks on Reinforcement Learning Agents: Load Altering Attacks on a Cyber Physical Power System
Kiernan Broda-Milian
Ranwa Al-Mallah
Components of cyber-physical systems, which affect real-world processes, are often exposed to the internet. Replacing conventional control methods with Deep Reinforcement Learning (DRL) in energy systems is an active area of research, as these systems become increasingly complex with the advent of renewable energy sources and the desire to improve their efficiency. Artificial Neural Networks (ANNs) are vulnerable to specific perturbations of their inputs or features, called adversarial examples. These perturbations are difficult to detect when properly regularized, but have significant effects on the ANN's output. Because DRL uses ANNs to map observations to optimal actions, DRL agents are similarly vulnerable to adversarial examples. This work proposes a novel attack technique for continuous control using a Group Difference Logits loss with a bifurcation layer. By combining aspects of targeted and untargeted attacks, the attack significantly increases the impact compared to an untargeted attack, with drastically smaller distortions than an optimally targeted attack. We demonstrate the impact of powerful gradient-based attacks in a realistic smart energy environment, show how the impact changes with different DRL agents and training procedures, and use statistical and time-series analysis to evaluate attack stealth. The results show that adversarial attacks can have significant impacts on DRL controllers, and that constraining an attack's perturbations makes them difficult to detect. However, certain DRL architectures are far more robust, and robust training methods can further reduce the impact.
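The Group Difference Logits loss and bifurcation layer are specific to the paper and not reproduced here; the sketch below (all names and hyperparameters hypothetical) shows only the generic mechanism the abstract builds on, a norm-bounded gradient-based perturbation of a continuous-control agent's observations, with a plain untargeted distance objective standing in for the proposed loss.

```python
import torch

def pgd_observation_attack(policy, obs, eps=0.01, alpha=0.004, steps=10):
    """Norm-bounded PGD-style perturbation of a DRL agent's observation.
    `policy` is any differentiable module mapping observations to actions.
    A random start is used because the distance-to-clean-action objective
    has zero gradient exactly at the clean observation."""
    clean_action = policy(obs).detach()
    adv = obs + torch.empty_like(obs).uniform_(-eps, eps)  # random init in ball
    for _ in range(steps):
        adv = adv.clone().detach().requires_grad_(True)
        # Untargeted objective: push the action away from the clean one.
        loss = torch.nn.functional.mse_loss(policy(adv), clean_action)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()        # gradient ascent step
            adv = obs + (adv - obs).clamp(-eps, eps)   # project into eps-ball
    return adv.detach()
```

Constraining `eps` is what trades attack impact against stealth, the trade-off the paper's statistical and time-series analyses evaluate.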
Unraveling Radiomics Complexity: Strategies for Optimal Simplicity in Predictive Modeling
Mahdi A. L. Loutfi
Teodora Boblea Podasca
Alex Zwanenburg
Taman Upadhaya
Jorge Barrios
David R Raleigh
William C. Chen
Dante P. I. Capaldi
Hong Zheng
Olivier Gevaert
Jing Wu
Alvin C. Silva
Paul J. Zhang
Harrison X. Bai
Jan Seuntjens
Steffen Löck
Patrick O. Richard
Olivier Morin
Caroline Reinhold
Martin Lepage …
Background: The high dimensionality of radiomic feature sets, the variability in radiomic feature types and potentially high computational requirements all underscore the need for an effective method to identify the smallest set of predictive features for a given clinical problem. Purpose: Develop a methodology and tools to identify and explain the smallest set of predictive radiomic features. Materials and Methods: 89,714 radiomic features were extracted from five cancer datasets: low-grade glioma, meningioma, non-small cell lung cancer (NSCLC), and two renal cell carcinoma cohorts (n=2104). Features were categorized by computational complexity into morphological, intensity, texture, linear filters, and nonlinear filters. Models were trained and evaluated on each complexity level using the area under the curve (AUC). The most informative features were identified, and their importance was explained. The optimal complexity level and associated most informative features were identified using systematic statistical significance analyses and a false discovery avoidance procedure, respectively. Their predictive importance was explained using a novel tree-based method. Results: MEDimage, a new open-source tool, was developed to facilitate radiomic studies. Morphological features were optimal for MRI-based meningioma (AUC: 0.65) and low-grade glioma (AUC: 0.68). Intensity features were optimal for CECT-based renal cell carcinoma (AUC: 0.82) and CT-based NSCLC (AUC: 0.76). Texture features were optimal for MRI-based renal cell carcinoma (AUC: 0.72). Tuning the Hounsfield unit range improved results for CECT-based renal cell carcinoma (AUC: 0.86). Conclusion: Our proposed methodology and software can estimate the optimal radiomics complexity level for specific medical outcomes, potentially simplifying the use of radiomics in predictive modeling across various contexts.
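A minimal sketch of the core loop described above, training and scoring a model on each complexity level separately. The classifier choice and cross-validation setup are assumptions, and the paper's significance tests, false-discovery avoidance procedure, and tree-based explanation method are not reproduced.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def auc_per_complexity(levels, y, seed=0):
    """levels: dict mapping a complexity level name (e.g. 'morphological',
    'intensity', 'texture', 'linear_filters', 'nonlinear_filters') to its
    (n_samples, n_features) feature matrix. Returns cross-validated AUC
    for a classifier trained on each level alone, so the simplest level
    whose AUC is tied with the best can then be selected."""
    scores = {}
    for name, X in levels.items():
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        scores[name] = cross_val_score(clf, X, y, cv=5,
                                       scoring="roc_auc").mean()
    return scores
```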
Design smells in multi-language systems and bug-proneness: a survival analysis
Mouna Abidi
Md Saidur Rahman
Moses Openja
On Generalization for Generative Flow Networks
Anas Krichel
Nikolay Malkin
Salem Lahlou
Generative Flow Networks (GFlowNets) have emerged as an innovative learning paradigm designed to address the challenge of sampling from an unnormalized probability distribution, called the reward function. This framework learns a policy on a constructed graph, which enables sampling from an approximation of the target probability distribution through successive steps of sampling from the learned policy. To achieve this, GFlowNets can be trained with various objectives, each of which can achieve the model's ultimate goal. The aspirational strength of GFlowNets lies in their potential to discern intricate patterns within the reward function and their capacity to generalize effectively to novel, unseen parts of the reward function. This paper attempts to formalize generalization in the context of GFlowNets, to link generalization with stability, and to design experiments that assess the capacity of these models to uncover unseen parts of the reward function. The experiments focus on length generalization, i.e., generalization to states that can only be reached by trajectories longer than those seen in training.
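For context, one widely used GFlowNet training objective (one of the "various objectives" mentioned above, not necessarily the one the paper analyzes) is trajectory balance. For a complete trajectory $\tau = (s_0 \to s_1 \to \dots \to s_n = x)$ it penalizes

$$\mathcal{L}_{\mathrm{TB}}(\tau) = \left( \log \frac{Z_\theta \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t; \theta)}{R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}; \theta)} \right)^{2},$$

where $P_F$ is the learned forward policy, $P_B$ a backward policy, and $Z_\theta$ a learned estimate of the partition function; driving this loss to zero on all trajectories makes the sampler draw terminal states $x$ with probability proportional to $R(x)$.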
LORD: Low Rank Decomposition Of Monolingual Code LLMs For One-Shot Compression
Ayush Kaushal
Tejas Vaidhya
Low-rank decomposition of a matrix, i.e., splitting a large matrix into a product of two smaller matrices, offers a means of compression that reduces the parameters of a model without sparsification, and hence delivers more speedup on modern hardware. Moreover, unlike quantization, the compressed linear layers remain fully differentiable and all parameters trainable, while still leveraging the existing highly efficient kernels over floating-point matrices. We study the potential to compress Large Language Models (LLMs) for monolingual code generation via Low Rank Decomposition (LoRD) and observe that the ranks of the linear layers in these models can be reduced by up to 39.58% with less than a 1% increase in perplexity. We then use LoRD to compress StarCoder 16B to 13.2B parameters with no drop, and to 12.3B parameters with minimal drop, in HumanEval Pass@1 score, in less than 10 minutes on a single A100. The compressed models speed up inference by up to 22.35% with just a single line of code changed over HuggingFace's implementation with the PyTorch backend. LoRD models remain compatible with state-of-the-art near-lossless quantization methods such as SpQR, which allows leveraging further compression gains from quantization. Lastly, QLoRA over a LoRD model further reduces memory requirements by as much as 21.2% over vanilla QLoRA while offering similar gains from parameter-efficient fine-tuning. Our work shows Low Rank Decomposition (LoRD) to be a promising new paradigm for LLM compression.
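A minimal sketch of the basic operation behind LoRD: replacing a linear layer's weight with a truncated SVD factorization, so one large matrix becomes a product of two smaller ones. The helper name is hypothetical, and the authors' per-layer rank selection (behind the reported up-to-39.58% reduction) is not reproduced.

```python
import torch

def low_rank_decompose(linear, rank):
    """Replace an nn.Linear with weight W (out x in) by two smaller
    Linears computing B @ (A @ x), where A is (rank x in) and
    B is (out x rank), obtained from a truncated SVD of W."""
    W = linear.weight.data
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = torch.diag(S[:rank]) @ Vh[:rank]   # (rank, in_features)
    B = U[:, :rank]                        # (out_features, rank)
    first = torch.nn.Linear(linear.in_features, rank, bias=False)
    second = torch.nn.Linear(rank, linear.out_features,
                             bias=linear.bias is not None)
    first.weight.data = A
    second.weight.data = B
    if linear.bias is not None:
        second.bias.data = linear.bias.data
    return torch.nn.Sequential(first, second)
```

The factorization pays off whenever rank * (in + out) < in * out, which is why the layers stay fully differentiable and keep using ordinary dense floating-point kernels.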
Reinforcement Learning for Sequence Design Leveraging Protein Language Models
Jithendaraa Subramanian
Shiva Kanth Sujit
Niloy Irtisam
Umong Sain
Riashat Islam
The Effect of Data Corruption on Multimodal Long Form Responses
Daniel Z Kaplan
Alexis Roger
Mohamed Osman
Despite significant progress, Vision-Language Models (VLMs) still struggle with hallucinations, especially in long-form responses. Existing strategies have had limited success in specific cases, and long-form generation remains problematic. In this work we attempt to establish the link between the data used to train a model and the hallucinations in its output. To this end, we examine hallucinations through data corruption: we develop a method to corrupt training data and then train models on this data to see the effect on performance. We show that corrupting only a small portion of the long-form training data significantly impairs the performance of the model on long-form tasks, while leaving simpler tasks such as visual question answering and multiple choice relatively intact. All training code and models are released for reproducibility and future research.
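The abstract does not spell out the corruption scheme; the sketch below shows one plausible instantiation, mismatching the target responses of a small fraction of examples, with the `response` key and the swap strategy as assumptions.

```python
import random

def corrupt_long_form(dataset, fraction=0.05, seed=0):
    """Return a copy of `dataset` (a list of dict examples) in which a
    `fraction` of examples have their long-form target response replaced
    by the response of a different example, breaking image-text
    correspondence for that slice of the training data."""
    rng = random.Random(seed)
    corrupted = [dict(ex) for ex in dataset]
    picked = rng.sample(range(len(corrupted)),
                        int(fraction * len(corrupted)))
    for i in picked:
        j = rng.randrange(len(dataset))
        if j == i:                       # avoid swapping with itself
            j = (j + 1) % len(dataset)
        corrupted[i]["response"] = dataset[j]["response"]
    return corrupted
```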
TriLM vs FloatLM: Ternary LLMs are more Performant than Quantized FP16 LLMs
Ayush Kaushal
Tejas Vaidhya
Tejas Pandey
Aaryan Bhagat
Ternary LLMs offer significantly better performance for their size (measured in bits) than models trained and deployed in FP16/BF16. Given the widespread use of quantization before deployment and advancements in post-training quantization of LLMs, a pivotal question arises: do ternary LLMs indeed provide any discernible benefits? To address this, we first build an open family of pre-trained ternary Large Language Models (TriLM). Additionally, we include their counterparts pre-trained in FP16 (FloatLM) and quantized versions of FloatLM (QuantLM), with parameter counts spanning almost two orders of magnitude, from 99M to 3.9B parameters. We demonstrate that TriLMs with 3B+ parameters start to offer competitive performance compared to FloatLMs with the same parameter count, while providing significantly better performance for their size. Specifically, TriLM 3.9B, with fewer bits than FloatLM 830M, ranks between FloatLM 2.4B and FloatLM 3.9B when averaged across 6 popular commonsense and reasoning benchmarks. TriLMs also outperform quantized models, with TriLM 3.9B surpassing the larger QuantLM-3bit 3.9B. Furthermore, across knowledge-based benchmarks, TriLM maintains superiority for its size but lags for its parameter count: TriLM 3.9B falls halfway between FloatLM 1.5B and 2.4B, close to QuantLM-4bit 2.4B. To advance research on ternary LMs, we open-source over 500 checkpoints across the model families.
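For context, a BitNet-style "absmean" ternarization illustrates the kind of weight representation ternary LLMs use: each weight matrix becomes values in {-1, 0, +1} times a single scale, so a parameter costs roughly 1.58 bits instead of 16. This is a generic sketch, not necessarily TriLM's exact scheme.

```python
import torch

def ternarize(W, eps=1e-5):
    """Map a float weight matrix to (ternary values, scale), with
    W ~ scale * T and T in {-1, 0, +1}, using a per-tensor absmean scale."""
    scale = W.abs().mean().clamp(min=eps)
    T = (W / scale).round().clamp(-1, 1)
    return T, scale
```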
VFA: Vision Frequency Analysis of Foundation Models and Human
Mohammad Javad Darvishi Bayazi
Md Rifat Arefin
Jocelyn Faubert
Machine learning models often struggle with distribution shifts in real-world scenarios, whereas humans exhibit robust adaptation. Models that better align with human perception may achieve higher out-of-distribution generalization. In this study, we investigate how various characteristics of large-scale computer vision models influence their alignment with human capabilities and robustness. Our findings indicate that increasing model and data size, along with incorporating rich semantic information and multiple modalities, significantly enhances models' alignment with human perception and their overall robustness. Our empirical analysis demonstrates a strong correlation between out-of-distribution accuracy and human alignment.
Automatic Segmentation of the Spinal Cord Nerve Rootlets
Jan Valošek
Theo Mathieu
Raphaëlle Schlienger
Olivia S. Kowalczyk
Precise identification of spinal nerve rootlets is relevant to delineating spinal levels for the study of functional activity in the spinal cord. The goal of this study was to develop an automatic method for the semantic segmentation of spinal nerve rootlets from T2-weighted magnetic resonance imaging (MRI) scans. Images from two open-access MRI datasets were used to train a 3D multi-class convolutional neural network, using an active learning approach, to segment C2-C8 dorsal nerve rootlets. Each output class corresponds to a spinal level. The method was tested on 3T T2-weighted images from datasets unseen during training to assess inter-site, inter-session, and inter-resolution variability. The test Dice score was 0.67 ± 0.16 (mean ± standard deviation across rootlet levels), suggesting good performance. The method also demonstrated low inter-vendor and inter-site variability (coefficient of variation = 1.41%), as well as low inter-session variability (coefficient of variation = 1.30%), indicating stable predictions across different MRI vendors, sites, and sessions.
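The two metrics quoted above can be computed as follows; a minimal sketch in which the inputs (binary rootlet masks, and the per-session quantity the coefficient of variation summarizes) are assumptions.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def coefficient_of_variation(values):
    """CoV in percent (std/mean), e.g. over per-session measurements."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std() / values.mean()
```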
A Bayesian Non-Stationary Heteroskedastic Time Series Model for Multivariate Critical Care Data
Zayd Omar
David A. Stephens
Alexandra M. Schmidt
Temperature-dependent Spike-ACE2 interaction of Omicron subvariants is associated with viral transmission
Mehdi Benlarbi
Shilei Ding
Étienne Bélanger
Alexandra Tauzin
Raphael Poujol
Halima Medjahed
Omar El Ferri
Yuxia Bo
Catherine Bourassa
Judith Fafard
Marzena Pazgier
Inès Levade
Cameron Abrams
Marceline Côté
Andrés Finzi
The continued evolution of SARS-CoV-2 requires persistent monitoring of its subvariants. Omicron subvariants are responsible for the vast majority of SARS-CoV-2 infections worldwide, with XBB and BA.2.86 sublineages representing more than 90% of circulating strains as of January 2024. In this study, we characterized the functional properties of Spike glycoproteins from BA.2.75, CH.1.1, DV.7.1, BA.4/5, BQ.1.1, XBB, XBB.1, XBB.1.16, XBB.1.5, FD.1.1, EG.5.1, HK.3, BA.2.86 and JN.1. We tested their capacity to evade plasma-mediated recognition and neutralization, their ACE2 binding, their susceptibility to cold inactivation, Spike processing, as well as the impact of temperature on the Spike-ACE2 interaction. We found that, compared to the early wild-type (D614G) strain, the Spike glycoproteins of most Omicron subvariants evolved to escape recognition and neutralization by plasma from individuals who received a fifth dose of bivalent (BA.1 or BA.4/5) mRNA vaccine and to improve ACE2 binding, particularly at low temperatures. Moreover, BA.2.86 had the highest affinity for ACE2 at all temperatures tested. We found that Omicron subvariant Spike processing is associated with susceptibility to cold inactivation. Intriguingly, we found that Spike-ACE2 binding at low temperature was significantly associated with the growth rates of Omicron subvariants in humans. Overall, we report that Spikes from newly emerged Omicron subvariants are relatively more stable and resistant to plasma-mediated neutralization, and present improved affinity for ACE2 which, particularly at low temperatures, is associated with their growth rates.