Publications

Multi-Domain Balanced Sampling Improves Out-of-Distribution Generalization of Chest X-ray Pathology Prediction Models
Enoch Amoatey Tetteh
Joseph D Viviano
Joseph Paul Cohen
Learning models that generalize under different distribution shifts in medical imaging has been a long-standing research challenge. There have been several proposals for efficient and robust visual representation learning among vision research practitioners, especially in the sensitive and critical biomedical domain. In this paper, we propose an idea for out-of-distribution generalization of chest X-ray pathologies that uses a simple balanced batch sampling technique. We observed that balanced sampling between the multiple training datasets improves the performance over baseline models trained without balancing.
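The balanced batch sampling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dataset names and batch composition are assumptions, and a real pipeline would sample images and labels rather than placeholder tuples.

```python
import random

def balanced_batches(datasets, batch_size, n_batches, seed=0):
    """Yield batches drawing an equal number of samples from each dataset.

    `datasets` is a list of per-domain sample lists; `batch_size` is
    assumed divisible by the number of domains, so every domain
    contributes exactly batch_size // len(datasets) samples per batch.
    """
    rng = random.Random(seed)
    per_domain = batch_size // len(datasets)
    for _ in range(n_batches):
        batch = []
        for ds in datasets:
            # Sample with replacement so small domains are not exhausted.
            batch.extend(rng.choices(ds, k=per_domain))
        rng.shuffle(batch)
        yield batch

# Hypothetical chest X-ray sources of very different sizes: without
# balancing, the largest source would dominate every batch.
domains = [[("source_a", i) for i in range(1000)],
           [("source_b", i) for i in range(50)],
           [("source_c", i) for i in range(300)]]
batch = next(balanced_batches(domains, batch_size=12, n_batches=1))
# Each source contributes exactly 4 of the 12 samples.
```

The point of the design is that the sampler equalizes domain frequency at the batch level, so the model cannot minimize training loss by fitting only the largest dataset.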
Fall 2021 Resurgence and COVID-19 Seroprevalence in Canada: Modelling waning and boosting COVID-19 immunity in Canada, A Canadian Immunization Research Network Study
David W. Dick
Lauren Childs
Zhilan Feng
Jing Li
Gergely Röst
Nick H. Ogden
Jane Heffernan
Generative Models of Brain Dynamics -- A review
Mahta Ramezanian Panahi
Germán Abrevaya
Jean-Christophe Gagnon-Audet
Vikram Voleti
The principled design and discovery of biologically- and physically-informed models of neuronal dynamics has been advancing since the mid-twentieth century. Recent developments in artificial intelligence (AI) have accelerated this progress. This review article gives a high-level overview of the approaches across different scales of organization and levels of abstraction. The studies covered in this paper include fundamental models in computational neuroscience, nonlinear dynamics, data-driven methods, as well as emergent practices. While not all of these models span the intersection of neuroscience, AI, and system dynamics, all of them do or can work in tandem as generative models, which, as we argue, provide superior properties for the analysis of neuroscientific data. We discuss the limitations and unique dynamical traits of brain data and the complementary need for hypothesis- and data-driven modeling. By way of conclusion, we present several hybrid generative models from recent literature in scientific machine learning, which can be efficiently deployed to yield interpretable models of neural dynamics.
Recovery after stroke: the severely impaired are a distinct group
Anna K. Bonkhoff
Thomas Hope
Adrian G Guggisberg
Rachel L Hawe
Sean P Dukelow
F. Chollet
D. X. Lin
Christian Grefkes
Howard Bowman
The Myelin‐Weighted Connectome in Parkinson's Disease
Tommy Boshkovski
Bratislav Mišić
Isabelle Arnulf
Jean‐Christophe Corvol
Marie Vidailhet
Stéphane Lehéricy
Nikola Stikov
Matteo Mancini
A Cost-Efficient Metadata Scheme for High-Performance Deduplication Systems
Yuxuan Mo
Yu Hua
Pengfei Li
Qin Cao
Data deduplication has been widely used in backup systems to eliminate redundant data, which speeds up the backup process and reduces the storage overhead. Deduplication packs multiple chunks into a large, fixed-size container as a storage unit to maintain the locality and achieve efficient compression. We observe that the traditional containers have low filling ratios due to a large amount of metadata generated by small files. Unfilled containers require more space to store a backup, which decreases the storage efficiency and reduces restore performance. In order to address this problem, we propose a Metadata region Adaptive Container Structure, called MACS. MACS maintains a tag to record the length of the metadata region in the container. The boundary between the metadata region and the data region is dynamically decided to ensure the maximum space efficiency of the containers. Moreover, we propose a container metadata length-based indexing and cache replacement strategy to allow MACS to be practical in data backup systems. We demonstrate the advantages of MACS with three real-world backup datasets. MACS achieves over 95% average container filling ratio, which is significantly higher than existing designs. MACS further achieves better restore performance than the traditional container structure. When combined with an existing rewriting method, MACS achieves an efficient trade-off between deduplication ratio and restore performance.
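The adaptive metadata-region idea above can be illustrated with a toy container. This is a sketch only: the field names, entry size, and in-memory representation are assumptions for illustration, not MACS's actual on-disk format.

```python
class AdaptiveContainer:
    """Toy fixed-size deduplication container with an adaptive
    metadata/data boundary, in the spirit of MACS.

    A length tag (`meta_len`) records how far the metadata region
    extends, so the boundary moves with the number of stored chunks
    instead of wasting space on a fixed-size metadata region.
    """

    def __init__(self, capacity=4 * 1024 * 1024, meta_entry_size=32):
        self.capacity = capacity
        self.meta_entry_size = meta_entry_size  # bytes per chunk record (assumed)
        self.meta_len = 0    # the tag: current metadata-region length
        self.data_len = 0
        self.entries = []    # (fingerprint, offset, size) records

    def free_space(self):
        return self.capacity - self.meta_len - self.data_len

    def add_chunk(self, fingerprint, data):
        """Append a chunk if both its record and its bytes fit."""
        if self.meta_entry_size + len(data) > self.free_space():
            return False
        self.entries.append((fingerprint, self.data_len, len(data)))
        self.meta_len += self.meta_entry_size
        self.data_len += len(data)
        return True

    def filling_ratio(self):
        return (self.meta_len + self.data_len) / self.capacity
```

Because the boundary is decided per container, a container full of small chunks can spend more of its capacity on metadata records, while a container of large chunks spends almost all of it on data, keeping the filling ratio high in both cases.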
Faults in deep reinforcement learning programs: a taxonomy and a detection approach
Amin Nikanjam
Mohammad Mehdi Morovati
Houssem Ben Braiek
Robustness of Markov perfect equilibrium to model approximations in general-sum dynamic games
Jayakumar Subramanian
Dynamic games (also called stochastic games or Markov games) are an important class of games for modeling multi-agent interactions. In many situations, the dynamics and reward functions of the game are learnt from past data and are therefore approximate. In this paper, we study the robustness of Markov perfect equilibrium to approximations in reward and transition functions. Using approximation results from Markov decision processes, we show that the Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game. We provide explicit bounds on the approximation error in terms of three quantities: (i) the error in approximating the reward functions, (ii) the error in approximating the transition function, and (iii) a property of the value function of the MPE of the approximate game. The second and third quantities depend on the choice of metric on probability spaces. We also present coarser upper bounds which do not depend on the value function but only depend on the properties of the reward and transition functions of the approximate game. We illustrate the results via a numerical example.
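A schematic version of the kind of bound described above, written in the style of MDP perturbation results (illustrative only; the paper's exact constants and metric-dependent terms may differ): if the reward error is at most $\alpha$ and the transition error at most $\beta$ (in a suitable metric on probability spaces), then an MPE of the approximate game is an $\varepsilon$-MPE of the original game with, schematically,

```latex
\varepsilon \;\lesssim\; \frac{2}{1-\gamma}\Bigl( \alpha \;+\; \gamma \,\beta\, \operatorname{span}(\hat V) \Bigr),
```

where $\gamma$ is the discount factor and $\operatorname{span}(\hat V)$ is a property of the value function of the approximate game's MPE, matching the three quantities listed in the abstract.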
Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge
Transformer models pre-trained with a masked-language-modeling objective (e.g., BERT) encode commonsense knowledge as evidenced by behavioral probes; however, the extent to which this knowledge is acquired by systematic inference over the semantics of the pre-training corpora is an open question. To answer this question, we selectively inject verbalized knowledge into the pre-training minibatches of BERT and evaluate how well the model generalizes to supported inferences after pre-training on the injected knowledge. We find generalization does not improve over the course of pre-training BERT from scratch, suggesting that commonsense knowledge is acquired from surface-level, co-occurrence patterns rather than induced, systematic reasoning.
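The injection step above can be sketched as a minibatch transformation. This is a hedged sketch, not the paper's pipeline: the injection rate, the replacement scheme, and the fact sentences are all illustrative assumptions.

```python
import random

def inject_knowledge(corpus_batches, facts, rate=0.1, seed=0):
    """Yield pre-training minibatches with verbalized facts mixed in.

    `facts` are knowledge statements rendered as plain sentences
    (e.g. "A robin is a bird."); each batch replaces roughly `rate`
    of its examples (at least one) with injected facts.
    """
    rng = random.Random(seed)
    for batch in corpus_batches:
        out = list(batch)
        n_inject = max(1, int(rate * len(out)))
        # Replace distinct positions, sampled without replacement.
        for i in rng.sample(range(len(out)), n_inject):
            out[i] = rng.choice(facts)
        yield out

# Hypothetical stream of 4 pre-training batches of 16 sentences each.
batches = [[f"sent-{b}-{i}" for i in range(16)] for b in range(4)]
facts = ["A robin is a bird.", "Birds can fly."]
mixed = list(inject_knowledge(batches, facts, rate=0.125))
```

The evaluation would then probe whether the model generalizes to inferences *supported by* the injected facts (e.g. properties of robins), rather than only to the injected sentences themselves.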
Neural Column Generation for Capacitated Vehicle Routing
Behrouz Babaki
Sanjay Dominik Jena
The column generation technique is essential for solving linear programs with an exponential number of variables. Many important applications such as the vehicle routing problem (VRP) now require it. However, in practice, getting column generation to converge is challenging. It often ends up adding too many columns. In this work, we frame the problem of selecting which columns to add as one of sequential decision-making. We propose a neural column generation architecture that iteratively selects columns to be added to the problem. The architecture, inspired by stabilization techniques, first predicts the optimal duals. These predictions are then used to obtain the columns to add. We show using VRP instances that in this setting several machine learning models yield good performance on the task and that our proposed architecture learned using imitation learning outperforms a modern stabilization technique.
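The predict-duals-then-price loop described above can be sketched as follows. All names and the toy callables are assumptions for illustration; a real implementation would solve the restricted master problem with an LP solver and run a pricing subproblem over VRP routes.

```python
def neural_column_generation(master, pricing, predict_duals, max_iters=50):
    """Sketch of column generation stabilized by a learned dual predictor.

    `master(columns)` solves the restricted master problem, returning
    (objective, true_duals); `predict_duals(columns)` is the learned
    model's dual estimate; `pricing(duals)` returns columns with
    negative reduced cost under those duals (empty when none exist).
    """
    columns = []
    for _ in range(max_iters):
        duals_hat = predict_duals(columns)   # learned stabilization step
        new_cols = pricing(duals_hat)        # price against predicted duals
        if not new_cols:
            # Fall back to the exact duals from the restricted master.
            _, duals = master(columns)
            new_cols = pricing(duals)
            if not new_cols:                 # no improving column: optimal
                break
        columns.extend(new_cols)
    return columns

# Toy illustration: "duals" are modeled as the set of still-uncovered
# items, and pricing returns one column covering a needed item.
def toy_master(cols): return 0.0, {1, 2, 3} - set(cols)
def toy_pricing(duals): return sorted(duals)[:1]
def toy_predict(cols): return {1, 2, 3} - set(cols)

columns = neural_column_generation(toy_master, toy_pricing, toy_predict)
```

The benefit of pricing against predicted duals is that good predictions steer the search toward columns that remain useful at optimality, so fewer columns are added overall.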
Generative Adversarial Networks
Ian G Goodfellow
Jean Pouget-Abadie
Mehdi Mirza
Bing Xu
David Warde-Farley
Sherjil Ozair
Generative Adversarial Networks (GANs) are a class of deep learning models that have shown remarkable success in generating realistic images, videos, and other types of data. This paper provides a comprehensive guide to GANs, covering their architecture, loss functions, training methods, applications, evaluation metrics, challenges, and future directions. We begin with an introduction to GANs and their historical development, followed by a review of the background and related work. We then provide a detailed overview of the GAN architecture, including the generator and discriminator networks, and discuss the key design choices and variations. Next, we review the loss functions utilized in GANs, including the original minimax objective, as well as more recent approaches such as the Wasserstein distance and gradient penalty. We then delve into the training of GANs, discussing common techniques such as alternating optimization, minibatch discrimination, and spectral normalization. We also provide a survey of the various applications of GANs across domains. In addition, we review the evaluation metrics utilized to assess the diversity and quality of GAN-produced data. Furthermore, we discuss the challenges and open issues in GANs, including mode collapse, training instability, and ethical considerations. Finally, we provide a glimpse into the future directions of GAN research, including improving scalability, developing new architectures, incorporating domain knowledge, and exploring new applications. Overall, this paper serves as a comprehensive guide to GANs, providing both theoretical and practical insights for researchers and practitioners in the field.
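For reference, the original minimax objective mentioned in the abstract is the standard GAN value function:

```latex
\min_G \max_D \; V(D, G) \;=\;
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
\;+\;
\mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr],
```

where the generator $G$ maps noise $z \sim p_z$ to samples and the discriminator $D$ outputs the probability that its input came from the data distribution; the Wasserstein and gradient-penalty variants mentioned above replace this objective with a critic-based distance.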