The Cost of Untracked Diversity in Brain-Imaging Prediction
Oualid Benkarim
Casey Paquola
Bo-yong Park
Valeria Kebets
Seok-Jun Hong
Reinder Vos de Wael
Shaoshi Zhang
B.T. Thomas Yeo
Michael Eickenberg
Tian Ge
Jean-Baptiste Poline
Boris C. Bernhardt
SPeCiaL: Self-Supervised Pretraining for Continual Learning
Lucas Caccia
Improving Continuous Normalizing Flows using a Multi-Resolution Framework
Vikram Voleti
Chris Finlay
Recent work has shown that Continuous Normalizing Flows (CNFs) can serve as generative models of images with exact likelihood calculation and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF). We introduce a transformation between resolutions that leaves the log-likelihood unchanged. We show that this approach yields comparable likelihood values on various image datasets, with improved performance at higher resolutions, fewer parameters, and training on only one GPU.
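The abstract does not spell out the resolution transformation; as a hedged illustration (not necessarily the authors' construction), an invertible map that splits an image x into a coarse image x_low and detail coefficients d contributes a log-determinant term under the change of variables:

\log p(x) = \log p(x_{low}) + \log p(d \mid x_{low}) + \log \left| \det \frac{\partial (x_{low}, d)}{\partial x} \right|

If the map is orthonormal (e.g., a Haar-style average/difference transform), the Jacobian determinant has absolute value 1 and the last term vanishes, which is one way a transformation between resolutions can leave the log-likelihood unchanged.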
Randomized Exploration for Reinforcement Learning with General Value Function Approximation
Haque Ishfaq
Qiwen Cui
Viet Huy Nguyen
Alex Ayoub
Zhuoran Yang
Zhaoran Wang
Lin F. Yang
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class …
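As a hedged sketch of the data-perturbation idea described in the abstract, here is a minimal linear-features version of a perturbed least-squares value-iteration step. All names, the noise scale sigma, and the ridge parameter lam are illustrative assumptions, and the paper's optimistic reward sampling procedure is not reproduced here.

import numpy as np

def perturbed_lsvi_step(phi, rewards, next_phi, w_next,
                        gamma=0.99, sigma=0.1, lam=1.0, rng=None):
    # phi: (n, d) features of visited state-action pairs
    # rewards: (n,) observed rewards
    # next_phi: (n, A, d) next-state features for each of A actions
    # w_next: (d,) weights from the previous value-iteration step
    rng = np.random.default_rng() if rng is None else rng
    # Randomized exploration: perturb rewards with i.i.d. scalar Gaussian noise.
    noisy_r = rewards + sigma * rng.standard_normal(rewards.shape[0])
    # Greedy bootstrapped targets from the previous step's value estimate.
    targets = noisy_r + gamma * np.max(next_phi @ w_next, axis=-1)
    # Ridge-regularized least squares fit to the perturbed targets.
    d = phi.shape[1]
    A = phi.T @ phi + lam * np.eye(d)
    return np.linalg.solve(A, phi.T @ targets)

Repeating this step with fresh noise yields randomized value estimates whose spread drives exploration, in the spirit of RLSVI.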
Variational Causal Networks: Approximate Bayesian Inference over Causal Structures
Yashas Annadani
Jonas Rothfuss
Alexandre Lacoste
Nino Scherrer
Anirudh Goyal
Stefan Bauer
Learning the causal structure that underlies data is a crucial step towards robust real-world decision making. The majority of existing work in causal inference focuses on determining a single directed acyclic graph (DAG) or a Markov equivalence class thereof. However, acting intelligently upon knowledge about causal structure inferred from finite data requires reasoning about its uncertainty. For instance, planning interventions to find out more about the causal mechanisms that govern our data requires quantifying epistemic uncertainty over DAGs. While Bayesian causal inference makes this possible, the posterior over DAGs becomes intractable even for a small number of variables. Aiming to overcome this issue, we propose a form of variational inference over the graphs of Structural Causal Models (SCMs). To this end, we introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs. Its number of parameters does not grow exponentially with the number of variables, and it can be tractably learned by maximising an Evidence Lower Bound (ELBO). In our experiments, we demonstrate that the proposed variational posterior provides a good approximation of the true posterior.
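In symbols (a sketch under the standard variational-inference setup, with notation assumed rather than taken from the paper), the posterior over DAGs G given data D is approximated by an autoregressive distribution q_\phi, trained by maximising

\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(G)}\left[\log p(D \mid G) + \log p(G) - \log q_\phi(G)\right] \le \log p(D),

where q_\phi(G) = \prod_i q_\phi(e_i \mid e_{<i}) factorises over edge decisions e_i, so its parameter count grows polynomially rather than exponentially with the number of variables.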
Comparative Study of Learning Outcomes for Online Learning Platforms
Francois St-Hilaire
Nathan J. Burns
Robert Belfer
Muhammad Shayan
Ariella Smofsky
Dung D. Vu
Antoine Frau
Joseph Potochny
Farid Faraji
Vincent Pavero
Neroli Ko
Ansona Onyi Ching
Sabina Elkins
A. Stepanyan
Adela Matajova
Iulian V. Serban
Ekaterina Kochmar
Incorporating dynamic flight network in SEIR to model mobility between populations
Xiaoye Ding
Shenyang Huang
Abby Leung
RNN with Particle Flow for Probabilistic Spatio-temporal Forecasting
Soumyasundar Pal
Liheng Ma
Yingxue Zhang
M. Coates
Spatio-temporal forecasting has numerous applications in analyzing wireless, traffic, and financial networks. Many classical statistical models often fall short in handling the complexity and high non-linearity present in time-series data. Recent advances in deep learning allow for better modelling of spatial and temporal dependencies. While most of these models focus on obtaining accurate point forecasts, they do not characterize the prediction uncertainty. In this work, we consider the time-series data as a random realization from a nonlinear state-space model and target Bayesian inference of the hidden states for probabilistic forecasting. We use particle flow as the tool for approximating the posterior distribution of the states, as it is shown to be highly effective in complex, high-dimensional settings. Thorough experimentation on several real-world time-series datasets demonstrates that our approach provides better characterization of uncertainty while maintaining comparable accuracy to state-of-the-art point forecasting methods.
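The particle-flow machinery itself is involved; as a deliberately simpler stand-in that makes the nonlinear state-space formulation concrete, here is a minimal bootstrap particle filter (not the authors' method; all names are illustrative):

import numpy as np

def bootstrap_particle_filter(y, f, g, q_std, r_std, n_particles=500, rng=None):
    # Model: x_t = f(x_{t-1}) + q_t,  y_t = g(x_t) + r_t, with Gaussian noises.
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(n_particles)                       # initial particles
    means = []
    for yt in y:
        x = f(x) + q_std * rng.standard_normal(n_particles)    # propagate
        logw = -0.5 * ((yt - g(x)) / r_std) ** 2                # observation weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))                      # posterior mean of x_t
        x = rng.choice(x, size=n_particles, p=w)                # resample
    return np.array(means)

# Toy usage: latent random walk observed through identity with noise.
rng = np.random.default_rng(0)
x_true = 0.1 * np.cumsum(rng.standard_normal(50))
y = x_true + 0.2 * rng.standard_normal(50)
est = bootstrap_particle_filter(y, f=lambda x: x, g=lambda x: x, q_std=0.1, r_std=0.2)

Particle flow replaces the weight-and-resample step with a continuous migration of particles toward the posterior, which scales better in high dimensions.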
Rapid simultaneous acquisition of macromolecular tissue volume, susceptibility, and relaxometry maps
Fang Frank Yu
Susie Y. Huang
T. Witzel
Ashwin S. Kumar
Congyu Liao
Tanguy Duval
Berkin Bilgic
Purpose A major obstacle to the clinical implementation of quantitative MR is the lengthy acquisition time required to derive multi-contrast parametric maps. We sought to reduce the acquisition time for quantitative susceptibility mapping (QSM) and macromolecular tissue volume (MTV) by acquiring both contrasts simultaneously by leveraging their redundancies. The Joint Virtual Coil concept with generalized autocalibrating partially parallel acquisitions (JVC-GRAPPA) was applied to reduce acquisition time further. Methods Three adult volunteers were imaged on a 3T scanner using a multi-echo 3D GRE sequence acquired at three head orientations. MTV, QSM, R2*, T1, and proton density maps were reconstructed. The same sequence (GRAPPA R=4) was performed in subject #1 with a single head orientation for comparison. Fully sampled data was acquired in subject #2, from which retrospective undersampling was performed (R=6 GRAPPA and R=9 JVC-GRAPPA). Prospective undersampling was performed in subject #3 (R=6 GRAPPA and R=9 JVC-GRAPPA) using gradient blips to shift k-space sampling in later echoes. Results Subject #1’s multi-orientation and single-orientation MTV maps were not significantly different based on RMSE. For subject #2, the retrospectively undersampled JVC-GRAPPA and GRAPPA generated results similar to fully sampled data. This approach was validated with the prospectively undersampled images in subject #3. Using QSM, R2*, and MTV, the contributions of myelin and iron content to susceptibility were estimated. Conclusion We have developed a novel strategy to simultaneously acquire data for the reconstruction of five intrinsically co-registered 1-mm isotropic resolution multi-parametric maps, with a scan time of 6 minutes using JVC-GRAPPA.
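As a small hedged illustration of the retrospective-undersampling experiment (the GRAPPA/JVC-GRAPPA reconstructions that fill in the missing lines from autocalibration data are far beyond this sketch; names and array layout are assumptions):

import numpy as np

def retrospective_undersample(kspace, R):
    # Keep every R-th phase-encode line of a fully sampled k-space array
    # whose first axis indexes phase-encode lines; zero the rest.
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::R] = True
    out = np.zeros_like(kspace)
    out[mask] = kspace[mask]
    return out, mask

# Toy usage: R=6, as in the subject #2 retrospective experiment.
k = np.fft.fft2(np.random.default_rng(1).standard_normal((128, 128)))
k6, mask6 = retrospective_undersample(k, R=6)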
SpeechBrain: A General-Purpose Speech Toolkit
Titouan Parcollet
Peter William VanHarn Plantinga
Aku Rouhe
Samuele Cornell
Loren Lugosch
Nauman Dawalatabad
Abdelwahab Heba
Jianyuan Zhong
Ju-Chieh Chou
Sung-Lin Yeh
Szu-Wei Fu
Chien-Feng Liao
Elena Rastorgueva
François Grondin
William Aris
Hwidong Na
Yan Gao
Renato De Mori …
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to facilitate the research and development of neural speech processing technologies by being simple, flexible, user-friendly, and well-documented. This paper describes the core architecture designed to support several tasks of common interest, allowing users to naturally conceive, compare and share novel speech processing pipelines. SpeechBrain achieves competitive or state-of-the-art performance in a wide range of speech benchmarks. It also provides training recipes, pretrained models, and inference scripts for popular speech datasets, as well as tutorials which allow anyone with basic Python proficiency to familiarize themselves with speech technologies.
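A minimal usage sketch along the lines of the pretrained-model interface the project documents; the model identifier below is one published recipe and is assumed here, so check the SpeechBrain documentation for current names:

from speechbrain.pretrained import EncoderDecoderASR

# Download a pretrained LibriSpeech ASR model and transcribe an audio file.
asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
)
print(asr.transcribe_file("example.wav"))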
Understanding Capacity Saturation in Incremental Learning
Shenyang Huang
Vincent Francois-Lavet
Correcting Momentum in Temporal Difference Learning
A common optimization tool used in deep reinforcement learning is momentum, which consists of accumulating and discounting past gradients and reapplying them at each iteration. We argue that, unlike in supervised learning, momentum in Temporal Difference (TD) learning accumulates gradients that become doubly stale: not only does the gradient of the loss change due to parameter updates, the loss itself changes due to bootstrapping. We first show that this phenomenon exists, and then propose a first-order correction term to momentum. We show that this correction term improves sample efficiency in policy evaluation by correcting target value drift. An important insight of this work is that deep RL methods are not always best served by directly importing techniques from the supervised setting.
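To make the staleness argument concrete, here is a minimal semi-gradient TD(0) loop with heavy-ball momentum on a linear value function (an illustrative sketch with assumed names; the paper's first-order correction term is not reproduced here):

import numpy as np

def td0_with_momentum(transitions, d, alpha=0.05, gamma=0.99, beta=0.9):
    w = np.zeros(d)   # value-function weights, V(s) = w @ phi(s)
    m = np.zeros(d)   # momentum buffer
    for phi, r, phi_next in transitions:
        # The bootstrapped target r + gamma * V(s') uses the *current* w, so
        # gradients accumulated in m are doubly stale: w has moved since they
        # were computed, and the TD target itself has moved with it.
        td_error = r + gamma * (w @ phi_next) - (w @ phi)
        grad = -td_error * phi    # semi-gradient of 0.5 * td_error ** 2
        m = beta * m + grad       # momentum keeps reapplying stale gradients
        w = w - alpha * m
    return w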