Publications

Kernel-Level Event-Based Performance Anomaly Detection in Software Systems under Varying Load Conditions
Anthonia Njoku
Heng Li
Position: Humanity Faces Existential Risk from Gradual Disempowerment
Jan Kulveit
Raymond Douglas
Nora Ammann
Deger Turan
David M. Krueger
David Duvenaud
The Search for Squawk: Agile Modeling in Bioacoustics
Otilia Stretcu
Jenny Hamer
Lauren Harrell
Rob Laber
Amanda K. Navine
Patrick Hart
Ben Williams
Timothy A. C. Lamont
Tries B. Rasak
Mars Coral Restoration Team
Sheryn Brodie
Brendan Doohan
Philip Eichinski
Paul Roe
Lin Schwarzkopf
Tom Denton
"It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models
Angel Hsing-Chi Hwang
Q. Vera Liao
A.R. Olteanu
Adam Trischler
Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization
Tianyue H. Zhang
Mateo Espinosa Zarlenga
Concept Bottleneck Models (CBMs) propose to enhance the trustworthiness of AI systems by constraining their decisions on a set of human-understandable concepts. However, CBMs typically assume that datasets contain accurate concept labels, an assumption often violated in practice, which we show can significantly degrade performance (by 25% in some cases). To address this, we introduce the Concept Preference Optimization (CPO) objective, a new loss function based on Direct Preference Optimization, which effectively mitigates the negative impact of concept mislabeling on CBM performance. We provide an analysis of key properties of the CPO objective, showing it directly optimizes for the concept's posterior distribution, and contrast it against Binary Cross Entropy (BCE), demonstrating that CPO is inherently less sensitive to concept noise. We empirically confirm our analysis by finding that CPO consistently outperforms BCE on three real-world datasets, both with and without added label noise. We make our code available on GitHub.
ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs
Yingzhi Wang
Mirco Ravanelli
Yusuf Cem Sübakan
Building spatial world models from sparse transitional episodic memories
Many animals possess a remarkable capacity to rapidly construct flexible mental models of their environments. These world models are crucial for ethologically relevant behaviors such as navigation, exploration, and planning. The ability to form episodic memories and make inferences based on these sparse experiences is believed to underpin the efficiency and adaptability of these models in the brain. Here, we ask: Can a neural network learn to construct a spatial model of its surroundings from sparse and disjoint episodic memories? We formulate the problem in a simulated world and propose a novel framework, the Episodic Spatial World Model (ESWM), as a potential answer. We show that ESWM is highly sample-efficient, requiring minimal observations to construct a robust representation of the environment. It is also inherently adaptive, allowing for rapid updates when the environment changes. In addition, we demonstrate that ESWM readily enables near-optimal strategies for exploring novel environments and navigating between arbitrary points, all without the need for additional training.
Burst firing optimizes invariant coding of natural communication signals by electrosensory neural populations
Michael G. Metzen
Amin Akhshi
Anmar Khadra
Maurice J. Chacron
Accurate perception of objects within the environment independent of context is essential for the survival of an organism. While neurons that respond in an invariant manner to different stimulus waveforms resulting from identity-preserving transformations of objects are thought to provide a neural correlate of context-independent perception, how such responses emerge in the brain remains poorly understood. Here, we demonstrate that burst firing in neural populations can give rise to an invariant representation of highly heterogeneous natural communication stimuli. Multi-unit recordings from central sensory neural populations showed that considering burst spike trains led to invariant representations at the population but not the single neuron level. Computational modeling further revealed that optimal invariance is achieved at burst firing levels seen experimentally. Taken together, our results demonstrate an important function for burst firing toward establishing invariant representations of sensory input in neural populations.
Context is Key: A Benchmark for Forecasting with Essential Textual Information
Andrew R. Williams
Étienne Marcotte
Valentina Zantedeschi
Alexandre Lacoste
Forecasting is a critical task in decision-making across numerous domains. While historical numerical data provide a start, they fail to convey the complete context for reliable and accurate predictions. Human forecasters frequently rely on additional information, such as background knowledge and constraints, which can efficiently be communicated through natural language. However, in spite of recent progress with LLM-based forecasters, their ability to effectively integrate this textual information remains an open question. To address this, we introduce "Context is Key" (CiK), a time-series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context, requiring models to integrate both modalities; crucially, every task in CiK requires understanding textual context to be solved successfully. We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters, and propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark. Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings. This benchmark aims to advance multimodal forecasting by promoting models that are both accurate and accessible to decision-makers with varied technical expertise. The benchmark can be visualized at https://servicenow.github.io/context-is-key-forecasting/v0/.
Diminished social memory and hippocampal correlates of social interactions in chronic social defeat stress susceptibility
Amanda Larosa
Tian Rui Zhang
Alice S. Wong
Cyrus Y.H. Fung
Xiong Ling Yun (Jenny) Long
Prabhjeet Singh
Benjamin C. M. Fung
Tak Pan Wong
The susceptibility to chronic stress has been associated with depression, a mood disorder which highly implicates the hippocampus. Hippocampal contribution to stress susceptibility has been supported by findings in mice following chronic social defeat stress (CSDS). However, little is known of the role of hippocampal activity in determining the development of stress susceptibility. We used the UCLA miniscope to longitudinally measure the activity of dorsal CA1 hippocampal neurons across CSDS. Apart from examining the representation of social information by these neurons, we also compared social memory in mice that were susceptible or resilient to CSDS. We observed more stable dCA1 correlates of social interaction and social memory in CSDS resilience. Such changes were absent in CSDS susceptible mice and accompanied by greater social memory impairments. CSDS susceptibility may be supported by hippocampal social cognitive processes, reflected in diminished hippocampal representations of social information and a greater impairment in social memory.
Discovering Symbolic Cognitive Models from Human and Animal Behavior
Nenad Tomasev
Navodita Sharma
Rishika Mohanta
Aparna Dev
Kuba Perlin
Siddhant Jain
Kyle Levin
Noemi Elteto
Will Dabney
Alexander Novikov
Glenn C Turner
Maria K Eckstein
Nathaniel D. Daw
Kevin J Miller
Kim Stachenfeld
Symbolic models play a key role in cognitive science, expressing computationally precise hypotheses about how the brain implements a cognitive process. Identifying an appropriate model typically requires a great deal of effort and ingenuity on the part of a human scientist. Here, we adapt FunSearch (Romera-Paredes et al. 2024), a recently developed tool that uses Large Language Models (LLMs) in an evolutionary algorithm, to automatically discover symbolic cognitive models that accurately capture human and animal behavior. We consider datasets from three species performing a classic reward-learning task that has been the focus of substantial modeling effort, and find that the discovered programs outperform state-of-the-art cognitive models for each. The discovered programs can readily be interpreted as hypotheses about human and animal cognition, instantiating interpretable symbolic learning and decision-making algorithms. Broadly, these results demonstrate the viability of using LLM-powered program synthesis to propose novel scientific hypotheses regarding mechanisms of human and animal cognition.
Does learning the right latent variables necessarily improve in-context learning?
Large autoregressive models like Transformers can solve tasks through in-context learning (ICL) without learning new weights, suggesting avenues for efficiently solving new tasks. For many tasks, e.g., linear regression, the data factorizes: examples are independent given a task latent that generates the data, e.g., linear coefficients. While an optimal predictor leverages this factorization by inferring task latents, it is unclear if Transformers implicitly do so or if they instead exploit heuristics and statistical shortcuts enabled by attention layers. Both scenarios have inspired active ongoing work. In this paper, we systematically investigate the effect of explicitly inferring task latents. We minimally modify the Transformer architecture with a bottleneck designed to prevent shortcuts in favor of more structured solutions, and then compare performance against standard Transformers across various ICL tasks. Contrary to intuition and some recent works, we find little discernible difference between the two; biasing towards task-relevant latent variables does not lead to better out-of-distribution performance, in general. Curiously, we find that while the bottleneck effectively learns to extract latent task variables from context, downstream processing struggles to utilize them for robust prediction. Our study highlights the intrinsic limitations of Transformers in achieving structured ICL solutions that generalize, and shows that while inferring the right latents aids interpretability, it is not sufficient to alleviate this problem.