Seven pillars of precision digital health and medicine
Arash Shaban-Nejad
Martin Michalowski
Niels Peek
John S. Brownstein
On the Morality of Artificial Intelligence
Alexandra Luccioni
Examines ethical principles and guidelines that surround machine learning and artificial intelligence.
On Catastrophic Interference in Atari 2600 Games
William Fedus
Dibya Ghosh
John D. Martin
Model-free deep reinforcement learning is sample inefficient. One hypothesis -- speculated, but not confirmed -- is that catastrophic interference within an environment inhibits learning. We test this hypothesis through a large-scale empirical study in the Arcade Learning Environment (ALE) and, indeed, find supporting evidence. We show that interference causes performance to plateau; the network cannot train on segments beyond the plateau without degrading the policy used to reach that point. By synthetically controlling for interference, we demonstrate performance boosts across architectures, learning algorithms and environments. A more refined analysis shows that learning one segment of a game often increases prediction errors elsewhere. Our study provides a clear empirical link between catastrophic interference and sample efficiency in reinforcement learning.
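To make the diagnostic concrete, here is a minimal, hypothetical sketch (not the paper's code) of how such interference can be probed: fit value predictions on one segment of experience and check whether prediction error on a held-out segment degrades. The network, data, and shapes below are toy stand-ins.

```python
# Hypothetical sketch: does fitting predictions on one game "segment"
# degrade predictions on another? Toy stand-in for the paper's study.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two toy "segments": (state, target-value) pairs.
seg_a = (torch.randn(256, 8), torch.randn(256, 1))
seg_b = (torch.randn(256, 8), torch.randn(256, 1))

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

def eval_error(seg):
    # Prediction error on a segment, without updating the network.
    with torch.no_grad():
        x, y = seg
        return mse(net(x), y).item()

err_b_before = eval_error(seg_b)

# Train on segment A only, as a proxy for "training beyond the plateau".
for _ in range(500):
    x, y = seg_a
    opt.zero_grad()
    loss = mse(net(x), y)
    loss.backward()
    opt.step()

err_b_after = eval_error(seg_b)
# If the error on B grows while A is being fit, learning A interfered with B.
print(f"segment-B error before: {err_b_before:.3f}, after: {err_b_after:.3f}")
```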
Machine learning analysis of exome trios to contrast the genomic architecture of autism and schizophrenia
Sameer Sardaar
Bill Qi
Alexandre Dionne-Laporte
Guy A. Rouleau
Policy Evaluation Networks
Jean Harb
Tom Schaul
Solving ODE with Universal Flows: Approximation Theory for Flow-Based Models
Chin-Wei Huang
Laurent Dinh
Normalizing flows are powerful invertible probabilistic models that can be used to translate between two probability distributions, in a way that allows us to efficiently track the change of probability density. However, to trade for computational efficiency in sampling and in evaluating the log-density, special parameterization designs have been proposed at the cost of representational expressiveness. In this work, we propose to use ODEs as a framework to establish universal approximation theory for certain families of flow-based models.
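The "change of probability density" that flows track is the standard change-of-variables identity, stated here for reference (textbook material, not a result of this paper): for an invertible map f with base density p_Z,

```latex
\[
  \log p_X(x) \;=\; \log p_Z\big(f(x)\big)
  \;+\; \log \left| \det \frac{\partial f(x)}{\partial x} \right|
\]
```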
Analysing brain networks in population neuroscience: a case for the Bayesian philosophy
Dorothea L. Floris
Andre Marquand
Network connectivity fingerprints are among today's best choices to obtain a faithful sampling of an individual's brain and cognition. Widely available MRI scanners can provide rich information tapping into network recruitment and reconfiguration that now scales to hundreds and thousands of humans. Here, we contemplate the advantages of analysing such connectome profiles using Bayesian strategies. These analysis techniques afford full probability estimates of the studied network coupling phenomena, provide analytical machinery to separate epistemological uncertainty and biological variability in a coherent manner, usher us towards avenues to go beyond binary statements on existence versus non-existence of an effect, and afford credibility estimates around all model parameters at play, which thus enable single-subject predictions with rigorous uncertainty intervals. We illustrate the brittle boundary between healthy and diseased brain circuits using autism spectrum disorder as a recurring theme where, we argue, network-based approaches in neuroscience will require careful probabilistic answers. This article is part of the theme issue ‘Unifying the essential concepts of biological networks: biological insights and philosophical foundations’.
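As a minimal illustration of the kind of output the abstract argues for (not an analysis from the paper), here is a conjugate Normal-Normal model yielding a full posterior and a credible interval for one hypothetical connectivity coefficient; the data and noise level are invented.

```python
# Minimal Bayesian illustration: posterior mean and 95% credible interval
# for a single (toy) network-coupling parameter, via conjugate updating.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(loc=0.3, scale=0.2, size=50)   # toy coupling estimates

# Prior N(mu0, tau0^2); observation noise sigma assumed known for simplicity.
mu0, tau0, sigma = 0.0, 1.0, 0.2
n = len(obs)

# Conjugate posterior over the coupling strength.
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + obs.sum() / sigma**2)

lo, hi = post_mean + np.array([-1.96, 1.96]) * np.sqrt(post_var)
print(f"posterior mean {post_mean:.3f}, 95% credible interval [{lo:.3f}, {hi:.3f}]")
```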
Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
Nicolas Loizou
Sharan Vaswani
Issam Hadj Laradji
We propose a stochastic variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method. Although computing the Polyak step-size requires knowledge of the optimal function values, this information is readily available for typical modern machine learning applications. Consequently, the proposed stochastic Polyak step-size (SPS) is an attractive choice for setting the learning rate for stochastic gradient descent (SGD). We provide theoretical convergence guarantees for SGD equipped with SPS in different settings, including strongly convex, convex and non-convex functions. Furthermore, our analysis results in novel convergence guarantees for SGD with a constant step-size. We show that SPS is particularly effective when training over-parameterized models capable of interpolating the training data. In this setting, we prove that SPS enables SGD to converge to the true solution at a fast rate without requiring the knowledge of any problem-dependent constants or additional computational overhead. We experimentally validate our theoretical results via extensive experiments on synthetic and real datasets. We demonstrate the strong performance of SGD with SPS compared to state-of-the-art optimization methods when training over-parameterized models.
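A hedged sketch of the idea on a toy interpolation problem, where each per-sample optimum f_i* is zero: the step-size is set per iteration from the current loss and gradient, here using a capped variant; the constants c and gamma_max are illustrative choices, not values prescribed by the paper.

```python
# Sketch of SGD with a stochastic Polyak step-size on toy least squares.
# Interpolation holds (y = X @ w_true exactly), so each f_i^* = 0.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

w = np.zeros(d)
c, gamma_max = 0.5, 1.0             # illustrative constants

for t in range(2000):
    i = rng.integers(n)
    resid = X[i] @ w - y[i]
    f_i = 0.5 * resid**2            # per-sample loss, with f_i^* = 0
    g = resid * X[i]                # its gradient
    g_norm2 = g @ g
    if g_norm2 == 0:                # this sample is already fit exactly
        continue
    step = min(f_i / (c * g_norm2), gamma_max)   # capped Polyak rule
    w -= step * g

print("distance to w_true:", np.linalg.norm(w - w_true))
```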
Neural Bayes: A Generic Parameterization Method for Unsupervised Representation Learning
Devansh Arpit
Huan Wang
Caiming Xiong
Richard Socher
Curriculum in Gradient-Based Meta-Reinforcement Learning
Bhairav Mehta
Tristan Deleu
Sharath Chandra Raparthy
Gradient-based meta-learners such as Model-Agnostic Meta-Learning (MAML) have shown strong few-shot performance in supervised and reinforcement learning settings. However, specifically in the case of meta-reinforcement learning (meta-RL), we can show that gradient-based meta-learners are sensitive to task distributions. With the wrong curriculum, agents suffer the effects of meta-overfitting, shallow adaptation, and adaptation instability. In this work, we begin by highlighting intriguing failure cases of gradient-based meta-RL and show that task distributions can wildly affect algorithmic outputs, stability, and performance. To address this problem, we leverage insights from recent literature on domain randomization and propose meta Active Domain Randomization (meta-ADR), which learns a curriculum of tasks for gradient-based meta-RL in a similar manner as ADR does for sim2real transfer. We show that this approach induces more stable policies on a variety of simulated locomotion and navigation tasks. We assess in- and out-of-distribution generalization and find that the learned task distributions, even in an unstructured task space, greatly improve the adaptation performance of MAML. Finally, we motivate the need for better benchmarking in meta-RL that prioritizes generalization over single-task adaptation performance.
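The general pattern the abstract describes can be sketched schematically: a task proposer is updated from a signal derived from how well the meta-learner adapts to the tasks it proposes. Everything below is a hypothetical stand-in (the proposer, the adaptation signal, the update rule), not the paper's meta-ADR algorithm.

```python
# Schematic curriculum loop: propose tasks, score them by an adaptation
# signal, and bias future proposals toward informative tasks.
import random

random.seed(0)

def propose_task(scores):
    # Hypothetical proposer: sample a discretized task parameter,
    # weighted by its running score.
    weights = [s + 1e-3 for s in scores]
    return random.choices(range(len(scores)), weights=weights)[0]

def adaptation_gap(task):
    # Placeholder for (post-adaptation return - pre-adaptation return),
    # which a real meta-RL run would measure for this task.
    return random.random() * (task + 1) / 10.0

scores = [1.0] * 10                 # one running score per task parameter
for step in range(100):
    task = propose_task(scores)
    gap = adaptation_gap(task)      # meta-ADR learns this signal instead
    scores[task] = 0.9 * scores[task] + 0.1 * gap
```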
The Geometry of Sign Gradient Descent
Lukas Balles
Fabian Pedregosa
Augmented Normalizing Flows: Bridging the Gap Between Generative Flows and Latent Variable Models
Chin-Wei Huang
Laurent Dinh
In this work, we propose a new family of generative flows on an augmented data space, with an aim to improve expressivity without drastically increasing the computational cost of sampling and evaluation of a lower bound on the likelihood. Theoretically, we prove the proposed flow can approximate a Hamiltonian ODE as a universal transport map. Empirically, we demonstrate state-of-the-art performance on standard benchmarks of flow-based generative modeling.
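For reference, in the augmented-variable setting the "lower bound on the likelihood" takes the standard variational form (textbook material, not a claim about the paper's exact objective): with auxiliary variable e and approximate posterior q(e | x),

```latex
\[
  \log p(x) \;\ge\; \mathbb{E}_{q(e \mid x)}\!\left[ \log p(x, e) - \log q(e \mid x) \right]
\]
```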