Publications

Population Genomics Approaches for Genetic Characterization of SARS-CoV-2 Lineages
Isabel Gamache
Arnaud N'Guessan
Justin Pelletier
Carmen Lia Murall
Vanda Gaonac’h-Lovejoy
David J. Hamelin
Raphaël Poujol
Jean-Christophe Grenier
Martin Smith
Etienne Caron
Morgan Craig
B. Jesse Shapiro
Julie G. Hussin
The genome of the Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), the pathogen that causes coronavirus disease 2019 (COVID-19), has been sequenced at an unprecedented scale, leading to a tremendous amount of viral genome sequencing data. To assist in tracing infection pathways and designing preventive strategies, a deep understanding of the viral genetic diversity landscape is needed. We present here a set of genomic surveillance tools from population genetics which can be used to better understand the evolution of this virus in humans. To illustrate the utility of this toolbox, we detail an in-depth analysis of the genetic diversity of SARS-CoV-2 in the first year of the COVID-19 pandemic. We analyzed 329,854 high-quality consensus sequences published in the GISAID database during the pre-vaccination phase. We demonstrate that, compared to standard phylogenetic approaches, haplotype networks can be computed efficiently on much larger datasets. This approach enables real-time lineage identification, a clear description of the relationship between variants of concern, and efficient detection of recurrent mutations. Furthermore, the time-series change of Tajima's D by haplotype provides a powerful metric of lineage expansion. Finally, principal component analysis (PCA) highlights key steps in variant emergence and facilitates the visualization of genomic variation in the context of SARS-CoV-2 diversity. The computational framework presented here is simple to implement, insightful for real-time genomic surveillance of SARS-CoV-2, and could be applied to any pathogen that threatens the health of populations of humans and other organisms.
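One of the statistics in this toolbox, Tajima's D, can be computed directly from aligned haplotypes using mean pairwise differences and segregating sites. The following is a minimal pure-Python sketch of the standard formula (Tajima, 1989), not the authors' actual pipeline:

```python
from itertools import combinations
from math import sqrt

def tajimas_d(seqs):
    """Tajima's D for a list of equal-length aligned haplotype strings."""
    n = len(seqs)
    # Mean pairwise nucleotide differences (pi)
    pairs = list(combinations(seqs, 2))
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)
    # Number of segregating (polymorphic) sites S
    S = sum(len(set(col)) > 1 for col in zip(*seqs))
    if S == 0:
        return 0.0
    # Standard coefficients from Tajima (1989)
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    # D contrasts pi with Watterson's estimator S/a1
    return (pi - S / a1) / sqrt(e1 * S + e2 * S * (S - 1))
```

Tracking this value per haplotype cluster over time is what the abstract refers to: a rapid drop of D toward negative values signals an expanding lineage.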
Selective Credit Assignment
Diana Borsa
Hado Philip van Hasselt
Transformation Coding: Simple Objectives for Equivariant Representations
On the Performance Implications of Deploying IoT Apps as FaaS
Mohab Aly
Soumaya Yacout
Gradients without Backpropagation
Atilim Güneş Baydin
Barak A. Pearlmutter
Don Syme
Frank N. Wood
Philip Torr
Novel informatics approaches to COVID-19 Research: From methods to applications
Huanan Xu
David L Buckeridge
Yi Wang
P. Tarczy-Hornoch
Advanced Diffusion MR Imaging for Multiple Sclerosis in the Brain and Spinal Cord
Masaaki Hori
Tomoko Maekawa
Kouhei Kamiya
Akifumi Hagiwara
Masami Goto
Mariko Yoshida Takemura
Shohei Fujita
Christina Andica
Koji Kamagata
Shigeki Aoki
Diffusion tensor imaging (DTI) has established its usefulness in evaluating normal-appearing white matter (NAWM) and other brain and spinal cord lesions that are difficult to assess with routine clinical MRI in multiple sclerosis (MS), a demyelinating disease. With the recent advances in the software and hardware of MRI systems, increasingly complex and sophisticated MRI and analysis methods, such as q-space imaging, diffusional kurtosis imaging, neurite orientation dispersion and density imaging, white matter tract integrity, and multiple diffusion encoding, referred to as advanced diffusion MRI, have been proposed. These are capable of capturing in vivo microstructural changes in the brain and spinal cord in normal and pathological states in greater detail than DTI. This paper reviews the current status of recent advanced diffusion MRI for assessing MS in vivo as part of an issue celebrating two decades of Magnetic Resonance in Medical Sciences (MRMS), an official journal of the Japanese Society of Magnetic Resonance in Medicine.
Halting Time is Predictable for Large Models: A Universality Property and Average-case Analysis
Average-case analysis computes the complexity of an algorithm averaged over all possible inputs. Compared to worst-case analysis, it is more representative of the typical behavior of an algorithm, but remains largely unexplored in optimization. One difficulty is that the analysis can depend on the probability distribution of the inputs to the model. However, we show that this is not the case for a class of large-scale problems trained with first-order methods including random least squares and one-hidden layer neural networks with random weights. In fact, the halting time exhibits a universality property: it is independent of the probability distribution. With this barrier for average-case analysis removed, we provide the first explicit average-case convergence rates showing a tighter complexity not captured by traditional worst-case analysis. Finally, numerical simulations suggest this universality property holds for a more general class of algorithms and problems.
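The universality claim is easy to probe numerically. The toy experiment below (an illustrative sketch using NumPy, not the paper's experimental setup) runs gradient descent on a random least-squares problem with two very different entry distributions — Gaussian and Rademacher — and records the halting time for each; under universality, the two counts should be comparable at large dimension:

```python
import numpy as np

def halting_time(A, b, tol=1e-6, max_iter=100_000):
    """Iterations of gradient descent on f(x) = 0.5 * ||Ax - b||^2
    until the gradient norm drops below tol."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    for k in range(max_iter):
        grad = A.T @ (A @ x - b)
        if np.linalg.norm(grad) < tol:
            return k
        x -= grad / L  # fixed step size 1/L
    return max_iter

rng = np.random.default_rng(0)
n, d = 400, 200
b = rng.standard_normal(n) / np.sqrt(n)
# Same scaling, very different entry distributions
t_gauss = halting_time(rng.standard_normal((n, d)) / np.sqrt(n), b)
t_rademacher = halting_time(rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(n), b)
```

Comparing `t_gauss` and `t_rademacher` across random draws gives a quick empirical check of the distribution-independence the abstract describes.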
Geographic concentration of SARS-CoV-2 cases by social determinants of health in metropolitan areas in Canada: a cross-sectional study
Yiqing Xia
Huiting Ma
Gary Moloney
Héctor A. Velásquez García
Monica Sirski
Naveed Z. Janjua
David Vickers
Tyler Williamson
Alan Katz
Kristy Yiu
Rafal Kustra
David L. Buckeridge
Marc Brisson
Stefan D. Baral
Sharmistha Mishra
Mathieu Maheu-Giroux
Geographical concentration of COVID-19 cases was consistent across CMAs, but the pattern by social determinants varied. Geographically-prioritized allocation of resources and services should be tailored to the local drivers of inequalities in transmission in response to SARS-CoV-2's resurgence.
Erratum to: Rapid simultaneous acquisition of macromolecular tissue volume, susceptibility, and relaxometry maps (Magn Reson Med. 2022;87:781‐790.)
Fang Frank Yu
Susie Yi Huang
Ashwin Kumar
Thomas Witzel
Congyu Liao
Tanguy Duval
Berkin Bilgic
The Brain-Computer Metaphor Debate Is Useless: A Matter of Semantics
Blake A. Richards
Timothy P. Lillicrap
It is commonly assumed that usage of the word “computer” in the brain sciences reflects a metaphor. However, there is no single definition of the word “computer” in use. In fact, based on the usage of the word “computer” in computer science, a computer is merely some physical machinery that can in theory compute any computable function. According to this definition the brain is literally a computer; there is no metaphor. But, this deviates from how the word “computer” is used in other academic disciplines. According to the definition used outside of computer science, “computers” are human-made devices that engage in sequential processing of inputs to produce outputs. According to this definition, brains are not computers, and arguably, computers serve as a weak metaphor for brains. Thus, we argue that the recurring brain-computer metaphor debate is actually just a semantic disagreement, because brains are either literally computers or clearly not very much like computers at all, depending on one's definitions. We propose that the best path forward is simply to put the debate to rest, and instead, have researchers be clear about which definition they are using in their work. In some circumstances, one can use the definition from computer science and simply ask, what type of computer is the brain? In other circumstances, it is important to use the other definition, and to clarify the ways in which our brains are radically different from the laptops, smartphones, and servers that surround us in modern life.
Minimizing Entropy to Discover Good Solutions to Recurrent Mixed Integer Programs
Charly Robinson La Rocca
Jean-François Cordeau
Current state-of-the-art solvers for mixed-integer programming (MIP) problems are designed to perform well on a wide range of problems. However, for many real-world use cases, problem instances come from a narrow distribution. This has motivated the development of specialized methods that can exploit the information in historical datasets to guide the design of heuristics. Recent works have shown that machine learning (ML) can be integrated with an MIP solver to inject domain knowledge and efficiently close the optimality gap. This hybridization is usually done with deep learning (DL), which requires a large dataset and extensive hyperparameter tuning to perform well. This paper proposes an online heuristic that uses the notion of entropy to efficiently build a model with minimal training data and tuning. We test our method on the locomotive assignment problem (LAP), a recurring real-world problem that is challenging to solve at scale. Experimental results show a speedup of an order of magnitude compared to a general-purpose solver (CPLEX) with a relative gap of less than 2%. We also observe that for some instances our method can discover better solutions than CPLEX within the time limit.
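The general idea of entropy-guided heuristics for recurrent instances can be sketched as follows: compute the Shannon entropy of each binary variable across historical solutions, and fix near-deterministic variables to shrink the search space for the next instance. This is a generic illustration of the principle, not the paper's exact procedure, and all names below are hypothetical:

```python
from math import log2

def variable_entropy(solutions):
    """Shannon entropy of each binary variable across historical solutions.
    solutions: list of dicts mapping variable name -> 0/1 value."""
    ent = {}
    for v in solutions[0]:
        p = sum(s[v] for s in solutions) / len(solutions)  # empirical P(v = 1)
        ent[v] = 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))
    return ent

def fix_low_entropy(solutions, threshold=0.1):
    """Variables whose value barely varies historically get fixed, leaving
    only high-entropy variables for the solver to decide."""
    ent = variable_entropy(solutions)
    return {v: solutions[0][v] for v, h in ent.items() if h <= threshold}
```

Passing the returned fixings to a solver as hard bounds (or a warm start) is one simple way such a model can close the optimality gap faster on a narrow instance distribution.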