Publications

Delta-AI: Local objectives for amortized inference in sparse graphical models
Jean-Pierre R. Falet
Hae Beom Lee
Nikolay Malkin
Chen Sun
Dragos Secrieru
Dinghuai Zhang
We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs), which we call …
Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization
Dinghuai Zhang
Ricky T. Q. Chen
Cheng-Hao Liu
On Diffusion Modeling for Anomaly Detection
Victor Livernoche
Vineet Jain
Known for their impressive performance in generative modeling, diffusion models are attractive candidates for density-based anomaly detection. This paper investigates different variations of diffusion modeling for unsupervised and semi-supervised anomaly detection. In particular, we find that Denoising Diffusion Probabilistic Models (DDPM) perform well on anomaly detection benchmarks yet are computationally expensive. By simplifying DDPM in application to anomaly detection, we are naturally led to an alternative approach called Diffusion Time Estimation (DTE). DTE estimates the distribution over diffusion time for a given input and uses the mode or mean of this distribution as the anomaly score. We derive an analytical form for this density and leverage a deep neural network to improve inference efficiency. Through empirical evaluations on the ADBench benchmark, we demonstrate that all diffusion-based anomaly detection methods perform competitively in both semi-supervised and unsupervised settings. Notably, DTE achieves orders-of-magnitude faster inference than DDPM while outperforming it on this benchmark. These results establish diffusion-based anomaly detection as a scalable alternative to traditional methods and recent deep-learning techniques for standard unsupervised and semi-supervised anomaly detection settings.
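For intuition, here is a minimal sketch of the DTE idea under illustrative assumptions (the MLP architecture, timestep count, and linear noise schedule below are placeholders, not the paper's configuration): a small network is trained to predict the diffusion timestep from a noised sample, and the mean of its predicted time distribution on a clean test input serves as the anomaly score.

```python
# Sketch of Diffusion Time Estimation (DTE), not the authors' code.
# Inputs far from the data manifold look as if they were diffused for
# longer, so a larger estimated diffusion time flags an anomaly.
import torch
import torch.nn as nn

T = 1000                                         # number of steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)            # standard linear schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta_t)

def noise(x0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Forward diffusion: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    a = alpha_bars[t].view(-1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)

class TimeEstimator(nn.Module):
    """Small MLP predicting a categorical distribution over diffusion time."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, T),  # logits over the T timesteps
        )
    def forward(self, x):
        return self.net(x)

def train_step(model, opt, x0):
    # Noise each sample to a random timestep and train the network
    # to recover that timestep by classification.
    t = torch.randint(0, T, (x0.shape[0],))
    loss = nn.functional.cross_entropy(model(noise(x0, t)), t)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def anomaly_score(model, x):
    # Mean of the predicted time distribution as the anomaly score.
    probs = model(x).softmax(dim=-1)
    return probs @ torch.arange(T, dtype=probs.dtype)
```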
Efficient Dynamics Modeling in Interactive Environments with Koopman Theory
Arnab Kumar Mondal
Siba Smarak Panigrahi
Sai Rajeswar
The accurate modeling of dynamics in interactive environments is critical for successful long-range prediction. Such a capability could advance Reinforcement Learning (RL) and Planning algorithms, but achieving it is challenging. Inaccuracies in model estimates can compound, resulting in increased errors over long horizons. We approach this problem through the lens of Koopman theory, where the nonlinear dynamics of the environment can be linearized in a high-dimensional latent space. This allows us to efficiently parallelize the sequential problem of long-range prediction using convolution while accounting for the agent’s action at every time step. Our approach also enables stability analysis and better control over gradients through time. Taken together, these advantages result in significant improvements over existing approaches, both in the efficiency and the accuracy of modeling dynamics over extended horizons. We also show that this model can be easily incorporated into dynamics modeling for model-based planning and model-free RL, and we report promising experimental results.
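As a rough illustration of the core idea (a sketch under assumed shapes and sizes, not the paper's architecture, which additionally parallelizes the rollout with convolutions): states are lifted into a latent space where the transition is linear in both the latent and the action, so a long-horizon rollout reduces to repeated matrix multiplies.

```python
# Minimal Koopman-style dynamics model (illustrative assumptions only).
import torch
import torch.nn as nn

class KoopmanDynamics(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, latent_dim: int = 64):
        super().__init__()
        # Nonlinear lift into the latent space and linear read-out back.
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Linear(latent_dim, state_dim)
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)  # Koopman operator
        self.B = nn.Linear(action_dim, latent_dim, bias=False)  # action effect

    def rollout(self, s0: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        """Predict future states via linear latent dynamics:
        z_{t+1} = A z_t + B a_t, decoded back to state space.
        s0: (batch, state_dim); actions: (batch, horizon, action_dim)."""
        z = self.encoder(s0)
        preds = []
        for a in actions.unbind(dim=1):  # sequential loop for clarity
            z = self.A(z) + self.B(a)
            preds.append(self.decoder(z))
        return torch.stack(preds, dim=1)  # (batch, horizon, state_dim)
```

Because the latent transition is linear, the rollout can also be unrolled in closed form, which is what makes convolution-based parallelization and stability analysis of the operator A possible.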
Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation
Divyat Mahajan
Brady Neal
Vasilis Syrgkanis
Ensemble Distillation for Unsupervised Constituency Parsing
Behzad Shayegh
Yanshuai Cao
Xiaodan Zhu
Lili Mou
Evaluating Representation Learning on the Protein Structure Universe
Arian Rokkum Jamasb
Alex Morehead
Chaitanya K. Joshi
Zuobai Zhang
Kieran Didi
Simon V. Mathis
Charles Harris
Jianlin Cheng
Pietro Liò
Tom Leon Blundell
Expected flow networks in stochastic environments and two-player zero-sum games
Marco Jiralerspong
Bilun Sun
Danilo Vucetic
Tianyu Zhang
Nikolay Malkin
Ghost on the Shell: An Expressive Representation of General 3D Shapes
Zhen Liu
Yao Feng
Yuliang Xiu
Weiyang Liu
Michael J. Black
Bernhard Schölkopf
Hallucination Detection and Hallucination Mitigation: An Investigation
Junliang Luo
Tianyu Li
Di Wu
M. Jenkin
Steve Liu
How connectivity structure shapes rich and lazy learning in neural circuits
Yuhan Helena Liu
Aristide Baratin
Jonathan Cornford
Stefan Mihalas
Eric Todd Shea-Brown
In theoretical neuroscience, recent work leverages deep learning tools to explore how certain network attributes critically influence learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representations are observed over the course of learning. However, in biology, neural circuit connectivity generally has a low-rank structure and therefore differs markedly from the random initializations generally used in these studies. As such, here we investigate how the structure of the initial weights (in particular, their effective rank) influences the network's learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks towards richer learning. Importantly, however, as an exception to this rule, we find that lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structure in shaping learning regimes, with implications for the metabolic costs of plasticity and the risk of catastrophic forgetting.
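To make the manipulated quantity concrete, here is a small sketch (our illustration, not the authors' code) of how one can construct random weight matrices with a controlled rank and measure the entropy-based effective rank of the resulting singular value spectrum.

```python
# Construct rank-controlled random recurrent weight matrices and
# compute their effective rank (illustrative assumptions only).
import torch

def low_rank_init(n: int, rank: int, gain: float = 1.0) -> torch.Tensor:
    """W = U V^T, a rank-`rank` random n x n matrix, scaled so its
    entries have variance comparable to a dense 1/n Gaussian init."""
    U = torch.randn(n, rank)
    V = torch.randn(n, rank)
    return gain * (U @ V.T) / (rank ** 0.5) / (n ** 0.5)

def effective_rank(W: torch.Tensor) -> float:
    """Entropy-based effective rank: exp of the entropy of the
    normalized singular value distribution."""
    s = torch.linalg.svdvals(W)
    p = (s / s.sum()).clamp_min(1e-12)  # avoid log(0) on tiny values
    return torch.exp(-(p * torch.log(p)).sum()).item()

W_low = low_rank_init(256, rank=4)     # biologically inspired low rank
W_high = low_rank_init(256, rank=256)  # full-rank random control
print(effective_rank(W_low), effective_rank(W_high))
```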
Improving Intrinsic Exploration by Creating Stationary Objectives
Roger Creus Castanyer
Joshua Romoff