Publications

What makes a theory of consciousness unscientific?
Derek H. Arnold
Mark G. Baxter
Tristan A. Bekinschtein
Yoshua Bengio
James W. Bisley
Jacob Browning
Dean Buonomano
David Carmel
Marisa Carrasco
Peter Carruthers
Olivia Carter
Dorita H. F. Chang
Mouslim Cherkaoui
Axel Cleeremans
Michael A. Cohen
Philip R. Corlett
Kalina Christoff
Sam Cumming
Cody A. Cushing
Beatrice de Gelder
Felipe De Brigard
Daniel C. Dennett
Nadine Dijkstra
Adrien Doerig
Paul E. Dux
Stephen M. Fleming
Keith Frankish
Chris D. Frith
Sarah Garfinkel
Melvyn A. Goodale
Jacqueline Gottlieb
Jake R. Hanson
Ran R. Hassin
Michael H. Herzog
Cecilia Heyes
Po-Jang Hsieh
Shao-Min Hung
Robert Kentridge
Tomas Knapen
Nikos Konstantinou
Konrad Kording
Timo L. Kvamme
Sze Chai Kwok
Renzo C. Lanfranco
Hakwan Lau
Joseph LeDoux
Alan L. F. Lee
Camilo Libedinsky
Matthew D. Lieberman
Ying-Tung Lin
Ka-Yuet Liu
Maro G. Machizawa
Julio Martinez-Trujillo
Janet Metcalfe
Matthias Michel
Kenneth D. Miller
Partha P. Mitra
Dean Mobbs
Robert M. Mok
Jorge Morales
Myrto Mylopoulos
Brian Odegaard
Charles C.-F. Or
Adrian M. Owen
David Pereplyotchik
Franco Pestilli
Megan A. K. Peters
Ian Phillips
Rosanne L. Rademaker
Dobromir Rahnev
Geraint Rees
Dario L. Ringach
Adina Roskies
Daniela Schiller
Aaron Schurger
D. Samuel Schwarzkopf
Ryan B. Scott
Aaron R. Seitz
Joshua Shepherd
Juha Silvanto
Heleen A. Slagter
Barry C. Smith
Guillermo Solovey
David Soto
Hugo Spiers
Timo Stein
Frank Tong
Peter U. Tse
Jonas Vibell
Sebastian Watzl
Josh Weisberg
Thalia Wheatley
Martijn E. Wokke
Michał Klincewicz
Tony Cheng
Michael Schmitz
Miguel Ángel Sebastián
Joel S. Snyder
NNetscape Navigator: Complex Demonstrations for Web Agents Without a Demonstrator
Shikhar Murty
Hao Zhu
Christopher D Manning
We introduce NNetscape Navigator (NNetnav), a method for training web agents entirely through synthetic demonstrations. These demonstrations are collected by first interacting with a browser to generate trajectory rollouts, which are then retroactively labeled into instructions using a language model. Most work on training browser agents has relied on expensive human supervision, and the limited previous work on such interaction-first synthetic data techniques has failed to provide effective search through the exponential space of exploration. In contrast, NNetnav exploits the hierarchical structure of language instructions to make this search more tractable: complex instructions are typically decomposable into simpler subtasks, allowing NNetnav to automatically prune interaction episodes when an intermediate trajectory cannot be annotated with a meaningful sub-task. We use NNetnav demonstrations from a language model for supervised fine-tuning of a smaller language model policy, and find improvements of 6 points on WebArena and over 20 points on MiniWoB++, two popular environments for web agents. Notably, on WebArena, we observe that language model policies can be further enhanced when fine-tuned with NNetnav demonstrations derived from the same language model. Finally, we collect and release a dataset of over 6k NNetnav demonstrations on WebArena, spanning a diverse and complex set of instructions.
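The pruning idea above can be sketched in a few lines. This is a hypothetical simplification, not the paper's implementation: `rollout_step` stands in for the browser-interaction policy, and `relabel` stands in for the language model that maps a trajectory prefix to a sub-task instruction, returning `None` when no meaningful label exists.

```python
def explore_with_pruning(rollout_step, relabel, max_steps=20):
    """Grow an interaction episode one action at a time; after each step,
    ask the (stand-in) language model `relabel` for a sub-task instruction
    describing the trajectory so far. If none exists, prune: return the
    last labelable prefix instead of exploring further.
    Returns (trajectory, completed_full_episode)."""
    traj = []
    for _ in range(max_steps):
        traj.append(rollout_step(traj))
        if relabel(traj) is None:       # no meaningful sub-task -> prune
            return traj[:-1], False
    return traj, True
```

Because pruning happens on trajectory prefixes, unpromising episodes are cut early rather than explored to full depth, which is what makes the exponential search space tractable.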
Towards Graph Foundation Models: A Study on the Generalization of Positional and Structural Encodings
Billy Joe Franks
Moshe Eliasof
Carola-Bibiane Schönlieb
Sophie Fellenz
Marius Kloft
Recent advances in integrating positional and structural encodings (PSEs) into graph neural networks (GNNs) have significantly enhanced their performance across various graph learning tasks. However, the general applicability of these encodings and their potential to serve as foundational representations for graphs remain uncertain. This paper investigates the fine-tuning efficiency, scalability with sample size, and generalization capability of learnable PSEs across diverse graph datasets. Specifically, we evaluate their potential as universal pre-trained models that can be easily adapted to new tasks with minimal fine-tuning and limited data. Furthermore, we assess the expressivity of the learned representations, particularly when used to augment downstream GNNs. We demonstrate through extensive benchmarking and empirical analysis that PSEs generally enhance downstream models. However, some datasets may require specific PSE-augmentations to achieve optimal performance. Nevertheless, our findings highlight their significant potential to become integral components of future graph foundation models. We provide new insights into the strengths and limitations of PSEs, contributing to the broader discourse on foundation models in graph learning.
Cross-validation for training and testing co-occurrence network inference algorithms
Daniel Agyapong
Jeffrey Ryan Propster
Jane Marks
DASFormer: self-supervised pretraining for earthquake monitoring
Zhichao Shen
Weiqiang Zhu
Earthquake monitoring is a fundamental task to unravel the underlying physics of earthquakes and mitigate associated hazards for public safety. Distributed acoustic sensing, or DAS, which transforms pre-existing telecommunication cables into ultra-dense seismic networks, offers a cost-effective and scalable solution for next-generation earthquake monitoring. However, current approaches for earthquake monitoring like PhaseNet and PhaseNet-2 primarily rely on supervised learning, while manually labeled DAS data is scarce and additional annotated datasets are difficult to obtain. In this paper, we present DASFormer, a novel self-supervised pretraining technique on DAS data with a coarse-to-fine framework that models spatial-temporal signal correlation. We treat earthquake monitoring as an anomaly detection task and demonstrate DASFormer can be directly utilized as a seismic phase detector. Experimental results demonstrate that DASFormer is effective in terms of several evaluation metrics and outperforms state-of-the-art time-series forecasting, anomaly detection, and foundation models on the unsupervised seismic detection task. We also demonstrate the potential of fine-tuning DASFormer to downstream tasks through case studies.
EMA-Net: Efficient Multitask Affinity Learning for Dense Scene Predictions
GradTune: Last-layer Fine-tuning for Group Robustness Without Group Annotation
Patrik Joslin Kenfack
Ulrich Matchi Aïvodji
S Ebrahimi Kahou
This work addresses the limitations of deep neural networks (DNNs) in generalizing beyond training data due to spurious correlations. Recent research has demonstrated that models trained with empirical risk minimization (ERM) learn both core and spurious features, often upweighting spurious ones in the final classification, which can frequently lead to poor performance on minority groups. Deep Feature Reweighting alleviates this issue by retraining the model's last classification layer using a group-balanced held-out validation set. However, relying on spurious feature labels during training or validation limits practical application, as spurious features are often unknown or costly to annotate. Our preliminary experiments reveal that ERM-trained models exhibit higher gradient norms on minority-group samples in the held-out dataset. Leveraging these insights, we propose an alternative approach called GradTune, which fine-tunes the last classification layer using high gradient-norm samples. Our results on four well-established benchmarks demonstrate that the proposed method can achieve competitive performance compared to existing methods without requiring group labels during training or validation.
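As a rough illustration of the selection rule (not the paper's code): for a binary logistic last layer, the per-sample gradient of the cross-entropy loss with respect to the layer weights is (p_i - y_i) x_i, so its norm is |p_i - y_i| · ||x_i||, and a GradTune-style step keeps the highest-norm samples for retraining the layer.

```python
import numpy as np

def last_layer_grad_norms(W, X, y):
    """Per-sample gradient norm of the cross-entropy loss w.r.t. a binary
    logistic last layer: grad_i = (p_i - y_i) * x_i, so the norm is
    |p_i - y_i| * ||x_i||."""
    p = 1.0 / (1.0 + np.exp(-X @ W))
    return np.abs(p - y) * np.linalg.norm(X, axis=1)

def gradtune_select(W, X, y, frac=0.25):
    """Indices of the `frac` highest gradient-norm samples, which (per the
    paper's observation) tend to over-represent minority-group samples
    under an ERM-trained model."""
    norms = last_layer_grad_norms(W, X, y)
    k = max(1, int(frac * len(y)))
    return np.argsort(norms)[-k:]
```

Samples the model gets confidently wrong produce large |p_i - y_i| and thus large norms, which is why this proxy surfaces minority-group examples without group labels.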
Graph-Jigsaw Conditioned Diffusion Model for Skeleton-based Video Anomaly Detection
Thi Kieu Khanh Ho
A Joint Space-Time Encoder for Geographic Time-Series Data
Konstantin Klemmer
Mélisande Teng
Many real-world processes are characterized by complex spatio-temporal dependencies, from climate dynamics to disease spread. Here, we introduce a new neural network architecture to model such dynamics at scale: the \emph{Space-Time Encoder}. Building on recent advances in \emph{location encoders}, models that take as inputs geographic coordinates, we develop a method that takes in geographic and temporal information simultaneously and learns smooth, continuous functions in both space and time. The inputs are first transformed using positional encoding functions and then fed into neural networks that allow the learning of complex functions. We implement a prototype of the \emph{Space-Time Encoder}, discuss the design choices of the novel temporal encoding, and demonstrate its utility in climate model emulation. We discuss the potential of the method across use cases, as well as promising avenues for further methodological innovation.
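A minimal sketch of the kind of sinusoidal space-time positional encoding described above, assuming degrees for latitude/longitude and a time coordinate normalized to [0, 1]; the exact functional form used in the paper may differ.

```python
import math

def space_time_encoding(lat, lon, t, n_freqs=4):
    """Sinusoidal features over latitude (period 180 deg), longitude
    (period 360 deg), and normalized time (period 1), at n_freqs octaves.
    Returns a flat list of 2 * 3 * n_freqs features."""
    feats = []
    for value, period in ((lat, 180.0), (lon, 360.0), (t, 1.0)):
        for k in range(n_freqs):
            w = (2.0 ** k) * 2.0 * math.pi / period
            feats.append(math.sin(w * value))
            feats.append(math.cos(w * value))
    return feats
```

Matching each base frequency to the coordinate's natural period makes the features periodic, so the encoding wraps smoothly across the antimeridian and across year boundaries.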
Mixed Patch Visible-Infrared Modality Agnostic Object Detection
Heitor Rapela Medeiros
David Latortue
Eric Granger
In real-world scenarios, using multiple modalities like visible (RGB) and infrared (IR) can greatly improve the performance of a predictive task such as object detection (OD). Multimodal learning is a common way to leverage these modalities, where multiple modality-specific encoders and a fusion module are used to improve performance. In this paper, we tackle a different way to employ RGB and IR modalities, where only one modality or the other is observed by a single shared vision encoder. This realistic setting requires a lower memory footprint and is more suitable for applications such as autonomous driving and surveillance, which commonly rely on RGB and IR data. However, when learning a single encoder on multiple modalities, one modality can dominate the other, producing uneven recognition results. This work investigates how to efficiently leverage RGB and IR modalities to train a common transformer-based OD vision encoder while countering the effects of modality imbalance. For this, we introduce a novel training technique to Mix Patches (MiPa) from the two modalities, in conjunction with a patch-wise modality agnostic module, for learning a common representation of both modalities. Our experiments show that MiPa can learn a representation to reach competitive results on traditional RGB/IR benchmarks while only requiring a single modality during inference. Our code is available at: https://github.com/heitorrapela/MiPa.
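The patch-mixing idea can be sketched as follows; single-channel images and a per-patch Bernoulli choice of modality are simplifying assumptions here, not the paper's exact recipe.

```python
import numpy as np

def mix_patches(rgb, ir, patch=4, p_rgb=0.5, rng=None):
    """Per non-overlapping patch, take the pixels from either the RGB or
    the IR image, so one shared encoder sees both modalities within a
    single input. Single-channel (H, W) arrays for simplicity; H and W
    must be divisible by `patch`."""
    rng = np.random.default_rng(0) if rng is None else rng
    H, W = rgb.shape
    mask = rng.random((H // patch, W // patch)) < p_rgb   # True -> RGB
    mask = np.kron(mask, np.ones((patch, patch), dtype=bool))
    return np.where(mask, rgb, ir)
```

Setting `p_rgb` away from 0.5 is one simple knob for countering modality imbalance: the under-performing modality can be sampled more often during training.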
A Realistic Protocol for Evaluation of Weakly Supervised Object Localization
Shakeeb Murtaza
Soufiane Belharbi
Eric Granger
Weakly Supervised Object Localization (WSOL) allows training deep learning models for classification and localization (LOC) using only global class-level labels. The absence of bounding box (bbox) supervision during training raises challenges in the literature for hyper-parameter tuning, model selection, and evaluation. WSOL methods rely on a validation set with bbox annotations for model selection, and a test set with bbox annotations for threshold estimation for producing bboxes from localization maps. This approach, however, is not aligned with the WSOL setting as these annotations are typically unavailable in real-world scenarios. Our initial empirical analysis shows a significant decline in LOC performance when model selection and threshold estimation rely solely on class labels and the image itself, respectively, compared to using manual bbox annotations. This highlights the importance of incorporating bbox labels for optimal model performance. In this paper, a new WSOL evaluation protocol is proposed that provides LOC information without the need for manual bbox annotations. In particular, we generate noisy pseudo-boxes from pretrained off-the-shelf region proposal methods such as Selective Search, CLIP, and RPN for model selection. These bboxes are also employed to estimate the threshold from LOC maps, circumventing the need for test-set bbox annotations. Our experiments with several WSOL methods on ILSVRC and CUB datasets show that using the proposed pseudo-bboxes for validation facilitates model selection and threshold estimation, with LOC performance comparable to those selected using GT bboxes on the validation set and threshold estimation on the test set. It also outperforms models selected using class-level labels, and then dynamically thresholded based solely on LOC maps.
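The threshold-estimation step can be illustrated with a small sketch: binarize the localization map at candidate thresholds and keep the one maximizing IoU against a (noisy) pseudo-box. The threshold grid and the IoU criterion are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def estimate_threshold(loc_map, pseudo_box, thresholds=None):
    """Choose the binarization threshold for a localization map by
    maximizing IoU against a noisy pseudo-box (y0, x0, y1, x1).
    Map values are assumed to lie in [0, 1]."""
    if thresholds is None:
        thresholds = np.linspace(0.05, 0.95, 19)
    y0, x0, y1, x1 = pseudo_box
    gt = np.zeros(loc_map.shape, dtype=bool)
    gt[y0:y1, x0:x1] = True
    best_t, best_iou = thresholds[0], -1.0
    for t in thresholds:
        pred = loc_map >= t
        union = np.logical_or(pred, gt).sum()
        iou = np.logical_and(pred, gt).sum() / union if union else 0.0
        if iou > best_iou:
            best_t, best_iou = t, iou
    return best_t, best_iou
```

Because only the threshold is fit against the pseudo-box, label noise in the box shifts the operating point rather than the learned model, which is what makes noisy proposals usable here.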
Shaping Inductive Bias in Diffusion Models through Frequency-Based Noise Control
Berton Earnshaw
Jason Hartford
Diffusion Probabilistic Models (DPMs) are powerful generative models that have achieved unparalleled success in a number of generative tasks. In this work, we aim to build inductive biases into the training and sampling of diffusion models to better accommodate the target data distribution. For topologically structured data, we devise a frequency-based noising operator to purposefully manipulate, and set, these inductive biases. We first show that appropriate manipulations of the noising forward process can lead DPMs to focus on particular aspects of the distribution to learn. We show that different datasets necessitate different inductive biases, and that appropriate frequency-based noise control induces increased generative performance compared to standard diffusion. Finally, we demonstrate the possibility of ignoring information at particular frequencies while learning. We show this in an image corruption and recovery task, where we train a DPM to recover the original target distribution after severe noise corruption.
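A frequency-based noising operator of the kind described can be sketched with a band-limited Gaussian noise source; the radial-band parameterization below is an assumption for illustration, not the paper's exact operator.

```python
import numpy as np

def frequency_noise(shape, band, rng=None):
    """Gaussian noise restricted to a radial frequency band. `band` is
    (lo, hi) as fractions of the Nyquist frequency; Fourier coefficients
    outside the band are zeroed before transforming back."""
    rng = np.random.default_rng(0) if rng is None else rng
    F = np.fft.fft2(rng.standard_normal(shape))
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.sqrt(fy ** 2 + fx ** 2) / 0.5   # radius, fraction of Nyquist
    lo, hi = band
    F[(r < lo) | (r > hi)] = 0.0
    # The mask depends only on |f|, so the spectrum stays Hermitian and
    # the inverse transform is real up to rounding error.
    return np.real(np.fft.ifft2(F))
```

Substituting such noise into the forward process concentrates corruption in the chosen band, steering which frequencies the model must learn to denoise.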