Publications

Naming Autism in the Right Context
Andres Roman-Urrestarazu
Varun Warrier
R-MelNet: Reduced Mel-Spectral Modeling for Neural TTS
A guided multiverse study of neuroimaging analyses
Jessica Dafflon
Pedro F. da Costa
František Váša
Ricardo Pio Monti
Peter J. Hellyer
Federico Turkheimer
Jonathan Smallwood
Emily Jones
Robert Leech
For most neuroimaging questions, the range of possible analytic choices makes it unclear how to evaluate conclusions from any single analytic method. One possible way to address this issue is to evaluate all possible analyses using a multiverse approach; however, this can be computationally challenging, and sequential analyses on the same data can compromise predictive power. Here, we establish how active learning on a low-dimensional space capturing the inter-relationships between pipelines can efficiently approximate the full spectrum of analyses. This approach retains the benefits of a multiverse analysis without incurring its cost in computational and predictive power. We illustrate this approach with two functional MRI datasets (predicting brain age and autism diagnosis), demonstrating how a multiverse of analyses can be efficiently navigated and mapped out using active learning. Furthermore, our approach not only identifies the subset of analysis techniques best able to predict age or distinguish individuals with autism spectrum disorder from healthy controls, but also allows the relationships between analyses to be quantified.
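The loop described above can be sketched generically. This is a minimal illustration, not the paper's method: the 2-D pipeline embedding, the toy `evaluate` oracle, and the farthest-point acquisition rule (a crude stand-in for surrogate uncertainty) are all assumptions made for the example.

```python
# Hedged sketch: uncertainty-style active learning over a 2-D embedding of
# analysis pipelines. Everything here is illustrative, not the paper's setup.

def evaluate(coord):
    # Stand-in for an expensive pipeline evaluation (e.g. one full fMRI fit).
    x, y = coord
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

def sq_dist(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def explore_multiverse(candidates, budget):
    # Start from an arbitrary pipeline, then query the budget's worth of
    # pipelines one at a time.
    labeled = {candidates[0]: evaluate(candidates[0])}
    while len(labeled) < budget:
        # Acquisition: evaluate the candidate farthest from anything already
        # labeled -- a crude proxy for where the surrogate is most uncertain.
        nxt = max((c for c in candidates if c not in labeled),
                  key=lambda c: min(sq_dist(c, l) for l in labeled))
        labeled[nxt] = evaluate(nxt)
    best = max(labeled, key=labeled.get)
    return best, labeled

# A 5x5 grid of pipeline embeddings; only 10 of 25 pipelines are ever run.
grid = [(i / 4, j / 4) for i in range(5) for j in range(5)]
best, labeled = explore_multiverse(grid, budget=10)
```

The point of the sketch is the budget: only a subset of the pipeline space is evaluated, yet the queried pipelines cover the space and the best labeled pipeline can be reported.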
Integrating Equity, Diversity, and Inclusion throughout the lifecycle of Artificial Intelligence in health
Milka Nyariro
Elham Emami
Samira Abbasgholizadeh Rahimi
Annotation Cost-Sensitive Deep Active Learning with Limited Data (Student Abstract)
Biological Sequence Design with GFlowNets
Alex Hernandez-Garcia
Bonaventure F. P. Dossou
Chanakya Ekbote
Michael Kilgour
Payel Das
Design of de novo biological sequences with desired properties, like protein and DNA sequences, often involves an active loop with several rounds of molecule ideation and expensive wet-lab evaluations. These experiments can consist of multiple stages, with increasing levels of precision and cost of evaluation, at which candidates are filtered. This makes the diversity of proposed candidates a key consideration in the ideation phase. In this work, we propose an active learning algorithm leveraging epistemic uncertainty estimation and the recently proposed GFlowNets as a generator of diverse candidate solutions, with the objective of obtaining a diverse batch of useful (as defined by some utility function, for example, the predicted anti-microbial activity of a peptide) and informative candidates after each round. We also propose a scheme to incorporate existing labeled datasets of candidates, in addition to a reward function, to speed up learning in GFlowNets. We present empirical results on several biological sequence design tasks, and we find that our method generates more diverse and novel batches of high-scoring candidates than existing approaches.
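The "diverse batch of useful candidates" objective can be illustrated with a simple greedy trade-off between a utility score and pairwise distance. This is a generic sketch: the toy utility, the Hamming-distance diversity term, and the trade-off weight are assumptions for the example, not the paper's GFlowNet-based sampler.

```python
# Hedged sketch: greedily build a batch that trades off candidate utility
# against diversity (Hamming distance to what is already in the batch).

def hamming(a, b):
    # Number of positions at which two equal-length sequences differ.
    return sum(x != y for x, y in zip(a, b))

def select_batch(candidates, utility, k, diversity_weight=0.5):
    # Seed with the single highest-utility candidate.
    batch = [max(candidates, key=utility)]
    while len(batch) < k:
        def gain(c):
            if c in batch:
                return float("-inf")
            # Reward usefulness plus distance to the nearest batch member.
            dist = min(hamming(c, b) for b in batch)
            return utility(c) + diversity_weight * dist
        batch.append(max(candidates, key=gain))
    return batch

seqs = ["AAAA", "AAAT", "TTTT", "GGGG", "AATT"]
u = lambda s: s.count("A")  # toy utility: count of 'A' residues
batch = select_batch(seqs, u, k=3)
```

With these toy numbers the selection favors high-utility sequences but spreads the batch out, rather than returning the three most similar top scorers.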
Building Robust Ensembles via Margin Boosting
Hongyang R. Zhang
Pradeep Ravikumar
Arun Sai Suggala
In the context of adversarial robustness, a single model does not usually have enough power to defend against all possible adversarial attacks, and as a result, has sub-optimal robustness. Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks. In this work, we take a principled approach towards building robust ensembles. We view this problem from the perspective of margin-boosting and develop an algorithm for learning an ensemble with maximum margin. Through extensive empirical evaluation on benchmark datasets, we show that our algorithm not only outperforms existing ensembling techniques, but also outperforms large models trained in an end-to-end fashion. An important byproduct of our work is a margin-maximizing cross-entropy (MCE) loss, which is a better alternative to the standard cross-entropy (CE) loss. Empirically, we show that replacing the CE loss in state-of-the-art adversarial training techniques with our MCE loss leads to significant performance improvement.
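The margin being maximized has a standard multiclass definition: the ensemble's score for the true class minus its best wrong-class score. The sketch below computes that quantity; the uniform averaging of member logits is a generic choice for illustration, not the paper's specific aggregation or its MCE loss.

```python
# Hedged sketch: multiclass margin of an ensemble on one example.
# margin = avg score of true class - best avg score among wrong classes.

def ensemble_margin(logits_per_model, true_class):
    n_models = len(logits_per_model)
    n_classes = len(logits_per_model[0])
    # Generic aggregation: uniform average of each member's logits.
    avg = [sum(m[c] for m in logits_per_model) / n_models
           for c in range(n_classes)]
    best_wrong = max(avg[c] for c in range(n_classes) if c != true_class)
    return avg[true_class] - best_wrong

# Two member models, three classes; true class is 0.
models = [[2.0, 1.0, 0.0], [1.5, 1.8, 0.2]]
m = ensemble_margin(models, true_class=0)
```

A positive margin means the ensemble classifies the example correctly with some slack; margin-boosting methods train members so this slack is large across the dataset.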
Direct Behavior Specification via Constrained Reinforcement Learning
Christopher Pal
The standard formulation of Reinforcement Learning lacks a practical way of specifying which behaviors are admissible and which are forbidden. Most often, practitioners go about the task of behavior specification by manually engineering the reward function, a counter-intuitive process that requires several iterations and is prone to reward hacking by the agent. In this work, we argue that constrained RL, which has almost exclusively been used for safe RL, also has the potential to significantly reduce the amount of work spent on reward specification in applied RL projects. To this end, we propose to specify behavioral preferences in the CMDP framework and to use Lagrangian methods to automatically weigh each of these behavioral constraints. Specifically, we investigate how CMDPs can be adapted to solve goal-based tasks while adhering to several constraints simultaneously. We evaluate this framework on a set of continuous control tasks relevant to the application of Reinforcement Learning for NPC design in video games.
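The "automatically weigh each constraint" step in generic Lagrangian constrained-RL training is a dual-ascent update on a multiplier. The sketch below shows that update in isolation; the cost sequence, threshold, and learning rate are toy assumptions, not the paper's experimental setup.

```python
# Hedged sketch: dual ascent on a Lagrange multiplier for one behavioral
# constraint. lambda rises while the measured cost exceeds the threshold
# (constraint violated) and decays toward 0 once the agent complies.

def lagrangian_updates(costs, threshold, lr=0.1):
    lam = 0.0
    history = []
    for c in costs:
        # Projected gradient ascent on the dual: keep lambda non-negative.
        lam = max(0.0, lam + lr * (c - threshold))
        history.append(lam)
    return history

# Constraint violated early (cost 1.0 > threshold 0.2), then satisfied.
hist = lagrangian_updates([1.0, 1.0, 0.0, 0.0, 0.0], threshold=0.2)
```

In a full training loop, `lam` would multiply the constraint cost inside the policy's loss, so the penalty weight grows only as long as the behavior actually violates the specification.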
Distributional Hamilton-Jacobi-Bellman Equations for Continuous-Time Reinforcement Learning
Bellemare Marc-Emmanuel
Continuous-time reinforcement learning offers an appealing formalism for describing control problems in which the passage of time is not naturally divided into discrete increments. Here we consider the problem of predicting the distribution of returns obtained by an agent interacting in a continuous-time, stochastic environment. Accurate return predictions have proven useful for determining optimal policies for risk-sensitive control, learning state representations, multiagent coordination, and more. We begin by establishing the distributional analogue of the Hamilton-Jacobi-Bellman (HJB) equation for Itô diffusions and the broader class of Feller-Dynkin processes. We then specialize this equation to the setting in which the return distribution is approximated by…
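For context, the classical expected-value HJB equation that the abstract's distributional analogue generalizes can be stated for an Itô diffusion with drift \(\mu\), diffusion coefficient \(\sigma\), reward \(r\), and discount rate \(\beta\). This background form is standard and is not the paper's contribution:

```latex
% Classical (expected-value) HJB for policy evaluation under an Itô diffusion
% dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t -- background only; the paper
% derives the distributional analogue of this relationship.
\[
  \beta\,V(x) \;=\; r(x) \;+\; \mu(x)\,V'(x) \;+\; \tfrac{1}{2}\,\sigma^2(x)\,V''(x)
\]
```

The distributional version replaces the scalar value \(V\) with a characterization of the whole return distribution, which is what the paper establishes for Itô diffusions and Feller-Dynkin processes.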
EqR: Equivariant Representations for Data-Efficient Reinforcement Learning
Estimating Social Influence from Observational Data
Caterina De Bacco
David Blei
We consider the problem of estimating social influence, the effect that a person's behavior has on the future behavior of their peers. The key challenge is that shared behavior between friends could be equally explained by influence or by two other confounding factors: 1) latent traits that caused people to both become friends and engage in the behavior, and 2) latent preferences for the behavior. This paper addresses the challenges of estimating social influence with three contributions. First, we formalize social influence as a causal effect, one which requires inferences about hypothetical interventions. Second, we develop Poisson Influence Factorization (PIF), a method for estimating social influence from observational data. PIF fits probabilistic factor models to networks and behavior data to infer variables that serve as substitutes for the confounding latent traits. Third, we develop assumptions under which PIF recovers estimates of social influence. We empirically study PIF with semi-synthetic and real data from Last.fm, and conduct a sensitivity analysis. We find that PIF estimates social influence more accurately than related methods and remains robust under some violations of its assumptions.
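The adjustment idea — control for a substitute of the latent confounding trait when regressing a person's behavior on a friend's earlier behavior — can be shown on synthetic data. Plain least squares on Gaussian data here stands in for PIF's Poisson factor models; the data-generating numbers are assumptions for the example.

```python
# Hedged sketch: why adjusting for the latent trait matters. A shared trait
# drives both the friend's earlier behavior and one's own, so the naive
# regression overstates influence; controlling for the trait recovers it.
import random

random.seed(0)
true_influence = 0.5
rows = []
for _ in range(2000):
    trait = random.gauss(0, 1)                  # latent confounding trait
    friend_prev = trait + random.gauss(0, 1)    # friend's earlier behavior
    behavior = true_influence * friend_prev + trait + random.gauss(0, 0.1)
    rows.append((friend_prev, trait, behavior))

def influence_estimates(rows):
    # Normal equations for behavior ~ a*friend_prev + b*trait,
    # plus the naive slope that ignores the confounder entirely.
    sxx = sum(x * x for x, z, y in rows)
    sxz = sum(x * z for x, z, y in rows)
    szz = sum(z * z for x, z, y in rows)
    sxy = sum(x * y for x, z, y in rows)
    szy = sum(z * y for x, z, y in rows)
    det = sxx * szz - sxz * sxz
    adjusted = (sxy * szz - szy * sxz) / det
    naive = sxy / sxx
    return adjusted, naive

adjusted, naive = influence_estimates(rows)
```

The adjusted estimate lands near the true effect of 0.5, while the naive slope is biased upward — the gap PIF closes by inferring substitute confounders from network and behavior data instead of observing the trait directly.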
Fair Representation Learning through Implicit Path Alignment
Qi Chen
Jiaqi Li
Boyu Wang