Publications

Bias-inducing geometries: an exactly solvable data model with fairness implications
Stefano Sarao Mannelli
Federica Gerace
Luca Saglietti
Revealing dynamic temporal trajectories and underlying regulatory networks with Cflows
Manik Kuchroo
Shabarni Gupta
Aarthi Venkat
Beatriz P. San Juan
Laura Rangel
Brandon Zhu
John G. Lock
Christine L. Chaffer
While single-cell technologies provide snapshots of tumor states, building continuous trajectories and uncovering causative gene regulatory networks remain significant challenges. We present Cflows, an AI framework that combines neural ODE networks with Granger causality to infer continuous cell state transitions and gene regulatory interactions from static scRNA-seq data. In a new five-time-point dataset capturing tumorsphere development over 30 days, Cflows reconstructs two types of trajectories leading to tumorsphere formation or apoptosis. Trajectory-based cell-of-origin analysis delineated a novel cancer stem cell profile characterized by CD44hiEPCAM+CAV1+, and uncovered a cell cycle–dependent enrichment of tumorsphere-initiating potential in G2/M or S-phase cells. Cflows identifies ESRRA as a crucial causal driver of the tumor-forming gene regulatory network; indeed, ESRRA inhibition significantly reduces tumor growth and metastasis in vivo. Cflows offers a powerful framework for uncovering cellular transitions and dynamic regulatory networks from static single-cell data.
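A minimal, hypothetical sketch of the two ingredients the abstract names, a neural ODE over cell states and a Granger-style causality score between gene trajectories, applied to synthetic data. This is not the authors' Cflows implementation; the class and function names (VectorField, granger_gain) and the toy signals are illustrative assumptions.

```python
# Hedged sketch only: neural ODE dynamics + a Granger-style score, on synthetic data.
import numpy as np
import torch
import torch.nn as nn


class VectorField(nn.Module):
    """Learned dx/dt over a low-dimensional cell-state embedding."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)


def integrate(field: VectorField, x0: torch.Tensor, n_steps: int = 20, dt: float = 0.05):
    """Simple Euler rollout of the neural ODE from initial states x0."""
    xs, x = [x0], x0
    for _ in range(n_steps):
        x = x + dt * field(x)
        xs.append(x)
    return torch.stack(xs)  # shape: (n_steps + 1, batch, dim)


def granger_gain(source: np.ndarray, target: np.ndarray, lag: int = 2) -> float:
    """Variance-reduction score: how much past values of `source` improve a
    lagged linear prediction of `target` (a minimal Granger-style statistic)."""
    T = len(target)
    Y = target[lag:]
    X_self = np.column_stack([target[lag - k - 1:T - k - 1] for k in range(lag)])
    X_full = np.column_stack([X_self] + [source[lag - k - 1:T - k - 1] for k in range(lag)])

    def resid_var(X):
        X1 = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        return np.var(Y - X1 @ beta)

    return resid_var(X_self) - resid_var(X_full)


if __name__ == "__main__":
    torch.manual_seed(0)
    field = VectorField(dim=2)
    traj = integrate(field, torch.randn(8, 2))
    print("rollout shape:", tuple(traj.shape))

    rng = np.random.default_rng(0)
    g1 = rng.normal(size=200).cumsum()
    g2 = 0.8 * np.roll(g1, 1) + rng.normal(scale=0.1, size=200)  # g1 lags into g2
    print("gain g1->g2:", granger_gain(g1, g2), " gain g2->g1:", granger_gain(g2, g1))
```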
Whither symbols in the era of advanced neural networks?
Thomas L. Griffiths
Brenden M. Lake
R. Thomas McCoy
Ellie Pavlick
Some of the strongest evidence that human minds should be thought about in terms of symbolic systems has been the way they combine ideas, produce novelty, and learn quickly. We argue that modern neural networks -- and the artificial intelligence systems built upon them -- exhibit similar abilities. This undermines the argument that the cognitive processes and representations used by human minds are symbolic, although the fact that these neural networks are typically trained on data generated by symbolic systems illustrates that such systems play an important role in characterizing the abstract problems that human minds have to solve. This argument leads us to offer a new agenda for research on the symbolic basis of human thought.
Persistent Instability in LLM's Personality Measurements: Effects of Scale, Reasoning, and Conversation History
Yorguin-Jose Mantilla-Ramos
Mahmood Hegazy
Alberto Tosato
D. Lemay
Large language models require consistent behavioral patterns for safe deployment, yet their personality-like traits remain poorly understood. We present PERSIST (PERsonality Stability in Synthetic Text), a comprehensive evaluation framework testing 25+ open-source models (1B-671B parameters) across 500,000+ responses. Using traditional (BFI-44, SD3) and novel LLM-adapted personality instruments, we systematically vary question order, paraphrasing, personas, and reasoning modes. Our findings challenge fundamental deployment assumptions: (1) Even 400B+ models exhibit substantial response variability (SD > 0.4); (2) Minor prompt reordering alone shifts personality measurements by up to 20%; (3) Interventions expected to stabilize behavior, such as chain-of-thought reasoning, detailed persona instructions, and inclusion of conversation history, can paradoxically increase variability; (4) LLM-adapted instruments show equal instability to human-centric versions, confirming architectural rather than translational limitations. This persistent instability across scales and mitigation strategies suggests current LLMs lack the foundations for genuine behavioral consistency. For safety-critical applications requiring predictable behavior, these findings indicate that personality-based alignment strategies may be fundamentally inadequate.
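A hedged illustration of the kind of stability check the abstract describes: administer a Likert-style inventory under shuffled question orders and report the spread of the resulting trait scores. The model call is a stub (score_item), the items are toy stand-ins rather than the real BFI-44, and nothing here is the PERSIST codebase.

```python
# Hedged sketch: trait-score variability under reordered questionnaires.
import random
import statistics

ITEMS = {  # toy items, illustrative only
    "extraversion": ["I am the life of the party.", "I start conversations."],
    "agreeableness": ["I sympathize with others' feelings.", "I take time out for others."],
}


def score_item(prompt: str, position: int) -> int:
    """Stand-in for querying an LLM for a 1-5 Likert rating; purely hypothetical.
    The answer drifts slightly with the item's position in the questionnaire,
    mimicking the order sensitivity the paper reports."""
    rng = random.Random(f"{prompt}|{position}")
    base = 3 + (len(prompt) % 3) - 1
    return max(1, min(5, base + rng.choice([-1, 0, 0, 1]) + (position % 2)))


def administer(order_seed: int) -> dict[str, float]:
    """One questionnaire run with a particular question ordering."""
    flat = [(trait, item) for trait, items in ITEMS.items() for item in items]
    random.Random(order_seed).shuffle(flat)
    per_trait: dict[str, list[int]] = {t: [] for t in ITEMS}
    for pos, (trait, item) in enumerate(flat):
        per_trait[trait].append(score_item(item, pos))
    return {t: sum(v) / len(v) for t, v in per_trait.items()}


if __name__ == "__main__":
    runs = [administer(seed) for seed in range(50)]
    for trait in ITEMS:
        scores = [r[trait] for r in runs]
        print(f"{trait:>14}: mean={statistics.mean(scores):.2f}  SD={statistics.stdev(scores):.2f}")
```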
Single-nucleus chromatin accessibility profiling identifies cell types and functional variants contributing to major depression
Anjali Chawla
Laura M. Fiori
Wenmin Zang
Malosree Maitra
Jennie Yang
Dariusz Żurawek
Gabriella Frosi
Reza Rahimian
Haruka Mitsuhashi
Maria Antonietta Davoli
Ryan Denniston
Gary Gang Chen
Volodymyr Yerko
Deborah Mash
Kiran Girdhar
Schahram Akbarian
Naguib Mechawar
Matthew Suderman
Corina Nagy
Gustavo Turecki
Understanding In-Context Learning of Linear Models in Transformers Through an Adversarial Lens
Usman Anwar
Johannes Von Oswald
Louis Kirsch
Spencer Frei
In this work, we make two contributions toward understanding in-context learning of linear models by transformers. First, we investigate the adversarial robustness of in-context learning in transformers to hijacking attacks, a type of adversarial attack in which the adversary's goal is to manipulate the prompt to force the transformer to generate a specific output. We show that both linear transformers and transformers with GPT-2 architectures are vulnerable to such hijacking attacks. However, adversarial robustness to such attacks can be significantly improved through adversarial training, done either at the pretraining or finetuning stage, and can generalize to stronger attack models. Our second main contribution is a comparative analysis of adversarial vulnerabilities across transformer models and other algorithms for learning linear models. This reveals two novel findings. First, adversarial attacks transfer poorly between larger transformer models trained from different seeds, despite achieving similar in-distribution performance. This suggests that transformers of the same architecture trained according to the same recipe may implement different in-context learning algorithms for the same task. Second, we observe that attacks do not transfer well between classical learning algorithms for linear models (single-step gradient descent and ordinary least squares) and transformers. This suggests that there could be qualitative differences between the in-context learning algorithms that transformers implement and these traditional algorithms.
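A small, hypothetical sketch of the hijacking-attack idea: the in-context learner is modeled as a differentiable ridge fit on the prompt's (x, y) examples, a stand-in for a trained transformer, and the adversary optimizes a perturbation of the context labels to force a chosen prediction at the query point. This is not the paper's setup; the model, hyperparameters, and the choice to perturb labels rather than inputs are assumptions.

```python
# Hedged sketch: gradient-based "hijacking" of an in-context linear learner.
import torch


def icl_predict(xs, ys, x_query, ridge=1e-3):
    """Differentiable stand-in for an in-context learner: ridge fit on the prompt."""
    d = xs.shape[1]
    A = xs.T @ xs + ridge * torch.eye(d)
    w = torch.linalg.solve(A, xs.T @ ys)
    return x_query @ w


torch.manual_seed(0)
d, n = 5, 20
w_true = torch.randn(d)
xs = torch.randn(n, d)                       # in-context inputs
ys_clean = xs @ w_true + 0.01 * torch.randn(n)  # in-context labels
x_query = torch.randn(d)
target = torch.tensor(10.0)                  # output the adversary wants to force

delta = torch.zeros(n, requires_grad=True)   # perturbation of the context labels
opt = torch.optim.Adam([delta], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    pred = icl_predict(xs, ys_clean + delta, x_query)
    loss = (pred - target) ** 2 + 0.01 * delta.norm() ** 2  # small-norm hijack
    loss.backward()
    opt.step()

with torch.no_grad():
    print("clean prediction:   ", icl_predict(xs, ys_clean, x_query).item())
    print("hijacked prediction:", icl_predict(xs, ys_clean + delta, x_query).item())
    print("perturbation norm:  ", delta.norm().item())
```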