Publications

SDLog: A Deep Learning Framework for Detecting Sensitive Information in Software Logs
Roozbeh Aghili
Xingfang Wu
Heng Li
Search-Based Correction of Reasoning Chains for Language Models
Jean-Pierre R. Falet
Oliver E. Richardson
Moksh J. Jain
Sungsoo Ahn
Seeing the Unseen: How EMoE Unveils Bias in Text-to-Image Diffusion Models
Lucas Berry
Axel Brando
Wei-Di Chang
Juan Higuera
Self-Evolving Curriculum for LLM Reasoning
Self-Play $Q$-Learners Can Provably Collude in the Iterated Prisoner's Dilemma
Juan Agustin Duque
Emilio Calvano
A growing body of computational studies shows that simple machine learning agents converge to cooperative behaviors in social dilemmas, such as collusive price-setting in oligopoly markets, raising questions about what drives this outcome. In this work, we provide theoretical foundations for this phenomenon in the context of self-play multi-agent Q-learners in the iterated prisoner’s dilemma. We characterize broad conditions under which such agents provably learn the cooperative Pavlov (win-stay, lose-shift) policy rather than the Pareto-dominated “always defect” policy. We validate our theoretical results through additional experiments, demonstrating their robustness across a broader class of deep learning algorithms.
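The Pavlov dynamic the abstract describes is easy to reproduce in miniature. Below is a minimal, illustrative sketch, not the paper's exact setup: two tabular Q-learners play the iterated prisoner's dilemma in self-play, conditioning on the previous joint action. The payoff values, learning rate, discount, and exploration rate are assumptions, and whether learning settles on Pavlov depends on them.

```python
import numpy as np

# Actions: 0 = cooperate, 1 = defect. Standard payoff ordering T > R > P > S
# (values here are illustrative, not taken from the paper).
R, S, T, P = 3.0, 0.0, 5.0, 1.0
PAYOFF = {(0, 0): (R, R), (0, 1): (S, T), (1, 0): (T, S), (1, 1): (P, P)}

rng = np.random.default_rng(0)
n_states = 4  # state = previous joint action, encoded as 2*a0 + a1
Q = [rng.normal(scale=0.01, size=(n_states, 2)) for _ in range(2)]

alpha, gamma, eps = 0.1, 0.95, 0.1  # assumed hyperparameters
state = 0  # start from mutual cooperation

for step in range(200_000):
    acts = []
    for i in range(2):
        if rng.random() < eps:  # epsilon-greedy exploration
            acts.append(int(rng.integers(2)))
        else:
            acts.append(int(np.argmax(Q[i][state])))
    a0, a1 = acts
    r0, r1 = PAYOFF[(a0, a1)]
    next_state = 2 * a0 + a1
    for i, (a, r) in enumerate([(a0, r0), (a1, r1)]):
        td = r + gamma * Q[i][next_state].max() - Q[i][state, a]
        Q[i][state, a] += alpha * td
    state = next_state

# Pavlov (win-stay, lose-shift): cooperate after (C,C) or (D,D),
# defect after mismatched outcomes -- i.e. greedy policy [0, 1, 1, 0]
# over states (CC, CD, DC, DD) for either agent under this encoding.
greedy = [int(np.argmax(Q[0][s])) for s in range(n_states)]
print("greedy policy over (CC, CD, DC, DD):", greedy)
print("matches Pavlov:", greedy == [0, 1, 1, 0])
```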
Structure-Aligned Protein Language Model
Can Chen
David Heurtel-Depeiges
Robert M. Vernon
Christopher J. Langmead
On the generalization of language models from in-context learning and finetuning: a controlled study
Andrew Lampinen
Arslan Chaudhry
Stephanie C.Y. Chan
Cody Wild
Diane Wan
Alexander Y. Ku
Murray P. Shanahan
James L McClelland
Large language models exhibit exciting capabilities, yet can show surprisingly narrow generalization from fine-tuning. For example, they can fail to generalize to simple reversals of relations they are trained on, or fail to make simple logical deductions based on trained information. These failures to generalize from fine-tuning can hinder practical application of these models. On the other hand, language models' in-context learning shows different inductive biases, and can generalize better in some cases. Here, we explore these differences in generalization between in-context- and fine-tuning-based learning. To do so, we constructed several novel datasets to evaluate and improve models' abilities to generalize from fine-tuning data. The datasets are designed to create clean tests of generalization, by isolating the knowledge in the dataset from that in pretraining. We expose pretrained large models to controlled subsets of the information in these datasets -- either in context, or through fine-tuning -- and evaluate their performance on test sets that require various types of generalization. We find overall that in data-matched settings, in-context learning can generalize more flexibly than fine-tuning (though we also find some qualifications of prior findings, such as cases when fine-tuning can generalize to reversals embedded in a larger structure of knowledge). We build on these findings to propose a method to enable improved generalization from fine-tuning: adding in-context inferences to fine-tuning data. We show that this method improves generalization across various splits of our datasets and other benchmarks. Our results have implications for understanding the inductive biases of different modes of learning in language models, and practically improving their performance.
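The proposed augmentation is conceptually simple: draw inferences from training statements in context, then fine-tune on the union. The sketch below is a hypothetical illustration of that loop, not the paper's implementation; `model_generate`, the prompt wording, and the toy reversal model are all stand-ins.

```python
import re
from typing import Callable, List

def augment_with_inferences(
    statements: List[str],
    model_generate: Callable[[str], str],
) -> List[str]:
    """Return the original statements plus model-drawn in-context inferences."""
    augmented = list(statements)
    for s in statements:
        prompt = (
            "Statement: " + s + "\n"
            "List simple logical consequences (e.g., reversals) of this "
            "statement, one per line:"
        )
        inferences = model_generate(prompt).splitlines()
        augmented.extend(line.strip() for line in inferences if line.strip())
    return augmented

def toy_model(prompt: str) -> str:
    # Trivial stand-in for an LLM call: reverse "X is the mentor of Y".
    m = re.search(r"Statement: (\w+) is the mentor of (\w+)\.", prompt)
    return f"{m.group(2)} is mentored by {m.group(1)}." if m else ""

train_set = ["Alice is the mentor of Bob."]
print(augment_with_inferences(train_set, toy_model))
# ['Alice is the mentor of Bob.', 'Bob is mentored by Alice.']
```

In practice the inferences would come from the pretrained model itself, and the augmented set would feed the usual fine-tuning pipeline.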
The NaijaVoices Dataset: Cultivating Large-Scale, High-Quality, Culturally-Rich Speech Data for African Languages
The NaijaVoices Community
Busayo Awobade
Abraham Owodunni
Handel Emezue
Gloria Monica Tobechukwu Emezue
N. N. Emezue
Sewade Ogun
Bunmi Akinremi
TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories
Honghua Dong
Jiacheng Yang
Xun Deng
Yuhe Jiang
Gennady Pekhimenko
Fan Long
Uncovering a Universal Abstract Algorithm for Modular Addition in Neural Networks
We propose a testable universality hypothesis, asserting that seemingly disparate neural network solutions observed in the simple task of modular addition are unified under a common abstract algorithm. While prior work interpreted variations in neuron-level representations as evidence for distinct algorithms, we demonstrate - through multi-level analyses spanning neurons, neuron clusters, and entire networks - that multilayer perceptrons and transformers universally implement the abstract algorithm we call the approximate Chinese Remainder Theorem. Crucially, we introduce approximate cosets and show that neurons activate exclusively on them. Furthermore, our theory works for deep neural networks (DNNs). It predicts that universally learned solutions in DNNs with trainable embeddings or more than one hidden layer require only O(log n) features, a result we empirically confirm. This work thus provides the first theory-backed interpretation of multilayer networks solving modular addition. It advances generalizable interpretability and opens a testable universality hypothesis for group multiplication beyond modular addition.
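For intuition on the abstraction being named: the exact Chinese Remainder Theorem lets addition mod n be carried out componentwise modulo coprime factors of n and then reconstructed, and the paper's claim is that networks learn an approximate analogue of this. A minimal worked sketch for n = 15 = 3 x 5 (the factorization and brute-force reconstruction are illustrative choices):

```python
from math import prod

def crt_add(a: int, b: int, factors=(3, 5)) -> int:
    """Add a and b mod prod(factors) via componentwise residue addition."""
    n = prod(factors)
    # Add componentwise in each residue ring Z_m ...
    residues = [(a % m + b % m) % m for m in factors]
    # ... then reconstruct the unique result mod n guaranteed by the CRT.
    for x in range(n):
        if all(x % m == r for m, r in zip(factors, residues)):
            return x
    raise ValueError("unreachable when the factors are pairwise coprime")

# Componentwise addition agrees with ordinary addition mod 15 everywhere.
assert all(crt_add(a, b) == (a + b) % 15 for a in range(15) for b in range(15))
print(crt_add(7, 11), (7 + 11) % 15)  # 3 3
```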
Virtual Cells: Predict, Explain, Discover
Emmanuel Noutahi
Jason Hartford
Ali Denton
Kristina Ulicna
Michael Craig
Jonathan Hsu
Michael Cuccarese
Christopher Gibson
Daniel Cohen
Berton Earnshaw