
Stefan Bauer

Independent visiting researcher
Research Topics
Causality
Representation Learning

Publications

A scalable gene network model of regulatory dynamics in single cells
Joseph D Viviano
Alejandro Tejada-Lapuerta
Weixu Wang
Fabian J. Theis
Learning Decision Trees as Amortized Structure Inference
Mohammed Mahfoud
Ghait Boukachab
Michał Koziarski
Alex Hernandez-Garcia
Causal machine learning for single-cell genomics
Alejandro Tejada-Lapuerta
Hananeh Aliee
Fabian J. Theis
Neural Causal Structure Discovery from Interventions
Nan Rosemary Ke
Anirudh Goyal
Bernhard Schölkopf
Michael Curtis Mozer
Recent promising results have generated a surge of interest in continuous optimization methods for causal discovery from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained solely from observational data. Interventional data, on the other hand, provides richer information about the underlying data-generating process. Nevertheless, extending and applying methods designed for observational data to include interventions is a challenging problem. To address this issue, we propose a general framework based on neural networks to develop models that incorporate both observational and interventional data. Notably, our method can handle the challenging and realistic scenario where the identity of the intervened-upon variable is unknown. We evaluate our proposed approach in the context of graph recovery, both de novo and from a partially-known edge set. Our method achieves strong benchmark results on various structure learning tasks, including structure recovery of synthetic graphs as well as standard graphs from the Bayesian Network Repository.
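The abstract's key difficulty, scoring a candidate graph on interventional batches whose target variable is unknown, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: linear-Gaussian mechanisms stand in for the paper's neural conditionals, and a simple heuristic (drop the likelihood term of the node whose observational mechanism fits worst) plays the role of remaining agnostic about the intervention target. All function and variable names are illustrative.

```python
# Hypothetical sketch, not the paper's code: scoring a candidate graph on an
# interventional batch when the intervened-upon variable is unknown.
import numpy as np

def node_log_likelihood(X, adjacency, weights, noise_std=1.0):
    """Per-sample, per-node Gaussian log-likelihood under a candidate linear SCM.

    X: (n_samples, d) data; adjacency: (d, d) 0/1 matrix with adjacency[i, j] = 1 for i -> j;
    weights: (d, d) linear coefficients (row = parent, column = child)."""
    mean = X @ (adjacency * weights)            # each node predicted from its parents
    resid = (X - mean) / noise_std
    return -0.5 * resid ** 2 - np.log(noise_std * np.sqrt(2.0 * np.pi))

def score_interventional_batch(X_int, adjacency, weights):
    """Score a candidate graph on an interventional batch with an unknown target.

    Heuristic: guess that the intervened node is the one whose observational
    mechanism fits worst, and exclude its likelihood term, since an intervention
    overrides that mechanism."""
    per_node_ll = node_log_likelihood(X_int, adjacency, weights).mean(axis=0)  # shape (d,)
    guessed_target = int(np.argmin(per_node_ll))
    graph_score = per_node_ll.sum() - per_node_ll[guessed_target]
    return graph_score, guessed_target
```

In a full structure-learning loop, scores of this kind would be compared across many candidate adjacency matrices; the argmin guess above is only one simple way to handle an unknown intervention target.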
Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation
The practical utility of causality in decision-making is widespread and brought about by the intertwining of causal discovery and causal inference. Nevertheless, a notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference. To address this gap, we evaluate seven established baseline causal discovery methods, including a newly proposed method based on GFlowNets, on the downstream task of treatment effect estimation. Through the implementation of a distribution-level evaluation, we offer valuable and unique insights into the efficacy of these causal discovery methods for treatment effect estimation, considering both synthetic and real-world scenarios, as well as low-data scenarios. The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes, while some tend to learn many low-probability modes, which impacts the (unrelaxed) recall and precision.
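As a rough illustration of what a distribution-level evaluation of downstream treatment effects involves, the sketch below assumes linear-Gaussian SCMs, in which the total effect of a treatment node on an outcome node is an entry of (I - W)^{-1}. It maps posterior DAG/parameter samples to a distribution of ATEs and compares that distribution against a known ground truth. This is a hypothetical illustration, not the benchmark's actual protocol or metrics, and all names are made up.

```python
# Hypothetical sketch: downstream ATE evaluation of a causal-discovery posterior,
# assuming linear-Gaussian SCMs with W[i, j] the i -> j edge coefficient.
import numpy as np

def linear_ate(W, treatment, outcome):
    """ATE of do(X_treatment += 1) on X_outcome in a linear SCM."""
    d = W.shape[0]
    total_effects = np.linalg.inv(np.eye(d) - W)   # sums edge-weight products over all directed paths
    return total_effects[treatment, outcome]

def ate_posterior(sampled_weight_matrices, treatment, outcome):
    """ATE induced by each posterior (graph, parameters) sample."""
    return np.array([linear_ate(W, treatment, outcome) for W in sampled_weight_matrices])

def fraction_near_truth(ates, true_ate, tol=0.05):
    """Toy distribution-level summary: posterior mass within `tol` of the true ATE."""
    return float(np.mean(np.abs(ates - true_ate) < tol))
```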
Learning Latent Structural Causal Models
Jithendaraa Subramanian
Yashas Annadani
Ivaxi Sheth
Nan Rosemary Ke
Causal learning has long concerned itself with the accurate recovery of underlying causal mechanisms. Such causal modelling enables better explanations of out-of-distribution data. Prior works on causal learning assume that the high-level causal variables are given. However, in machine learning tasks, one often operates on low-level data like image pixels or high-dimensional vectors. In such settings, the entire Structural Causal Model (SCM) -- structure, parameters, and high-level causal variables -- is unobserved and needs to be learnt from low-level data. We treat this problem as Bayesian inference of the latent SCM, given low-level data. For linear Gaussian additive noise SCMs, we present a tractable approximate inference method which performs joint inference over the causal variables, structure and parameters of the latent SCM from random, known interventions. Experiments are performed on synthetic datasets and a causally generated image dataset to demonstrate the efficacy of our approach. We also perform image generation from unseen interventions, thereby verifying out-of-distribution generalization for the proposed causal model.
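To make the setting concrete, the sketch below generates low-level observations from a latent linear-Gaussian additive-noise SCM under a known, hard intervention: a random upper-triangular weight matrix plays the role of the latent DAG, and a random linear decoder maps latent causal variables to observation vectors. This only illustrates the data-generating setup the abstract assumes, not the paper's inference method; all names are made up.

```python
# Hypothetical sketch of the assumed generative process: latent linear-Gaussian SCM
# with known interventions, decoded to low-level observations.
import numpy as np

def sample_latent_scm(W, n_samples, intervened_node=None, intervened_value=0.0,
                      noise_std=0.1, rng=None):
    """Ancestral sampling of latent causal variables z; W[i, j] is the i -> j coefficient
    of an upper-triangular (hence acyclic) weight matrix."""
    rng = np.random.default_rng() if rng is None else rng
    d = W.shape[0]
    z = np.zeros((n_samples, d))
    for j in range(d):                              # topological order: 0, 1, ..., d-1
        if j == intervened_node:
            z[:, j] = intervened_value              # hard (do) intervention on a known node
        else:
            z[:, j] = z @ W[:, j] + noise_std * rng.standard_normal(n_samples)
    return z

def decode(z, decoder):
    """Map latent causal variables to low-level observations (e.g. flattened pixels)."""
    return z @ decoder

rng = np.random.default_rng(0)
d, obs_dim = 5, 64
W = np.triu(rng.normal(size=(d, d)), k=1)           # random DAG over latent variables
decoder = rng.normal(size=(d, obs_dim))
x_obs = decode(sample_latent_scm(W, 1000, rng=rng), decoder)                      # observational
x_int = decode(sample_latent_scm(W, 1000, intervened_node=2, rng=rng), decoder)   # known intervention
```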
On the Generalization and Adaption Performance of Causal Models
Nino Scherrer
Anirudh Goyal
Nan Rosemary Ke
Bayesian Structure Learning with Generative Flow Networks
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) structure of Bayesian networks from data. Defining such a distribution is very challenging, due to the combinatorially large sample space, and approximations based on MCMC are often required. Recently, a novel class of probabilistic models, called Generative Flow Networks (GFlowNets), has been introduced as a general framework for generative modeling of discrete and composite objects, such as graphs. In this work, we propose to use a GFlowNet as an alternative to MCMC for approximating the posterior distribution over the structure of Bayesian networks, given a dataset of observations. Generating a sample DAG from this approximate distribution is viewed as a sequential decision problem, where the graph is constructed one edge at a time, based on learned transition probabilities. Through evaluation on both simulated and real data, we show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs, and it compares favorably against other methods based on MCMC or variational inference.
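The sequential-decision view described above, building a DAG one edge at a time while excluding edges that would close a cycle, can be sketched as follows. This is not DAG-GFlowNet itself: a uniform policy over admissible edges stands in for the learned transition probabilities, and the function names are illustrative.

```python
# Hypothetical sketch of edge-by-edge DAG construction with acyclicity masking.
import numpy as np

def would_create_cycle(adj, parent, child):
    """Adding parent -> child creates a cycle iff child already reaches parent."""
    d = adj.shape[0]
    reachable = np.linalg.matrix_power(adj + np.eye(d, dtype=int), d)  # >0 means a directed path exists
    return reachable[child, parent] > 0

def sample_dag(d, n_steps, rng=None):
    """Construct a DAG by repeatedly sampling an admissible edge (stopping early if none remain)."""
    rng = np.random.default_rng() if rng is None else rng
    adj = np.zeros((d, d), dtype=int)
    for _ in range(n_steps):
        candidates = [(i, j) for i in range(d) for j in range(d)
                      if i != j and adj[i, j] == 0 and not would_create_cycle(adj, i, j)]
        if not candidates:
            break
        i, j = candidates[rng.integers(len(candidates))]   # a trained GFlowNet would weight these choices
        adj[i, j] = 1
    return adj

print(sample_dag(d=5, n_steps=6))
```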