Publications

"COGITO in Space": a thought experiment in exo-neurobiology
Daniela de Paulis
Stephen Whitmarsh
Robert Oostenveld
Michael Sanders
SeroTracker: a global SARS-CoV-2 seroprevalence dashboard
Rahul K. Arora
Abel Joseph
Jordan Van Wyk
Simona Rocco
Austin Atmaja
Ewan May
Tingting Yan
Niklas Bobrovitz
Jonathan Chevrier
Matthew P. Cheng
Tyler Williamson
Implicit Regularization in Deep Learning: A View from Function Space
Aristide Baratin
Thomas George
César Laurent
We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a possible regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al., along a small number of task-relevant directions. By extrapolating a new analysis of Rademacher complexity bounds in linear models, we propose and study a new heuristic complexity measure for neural networks which captures this phenomenon, in terms of sequences of tangent kernel classes along the learning trajectory.
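As a rough illustration of the alignment idea described in this abstract, the sketch below computes a (non-centered) kernel-target alignment score between an empirical tangent kernel, built from the Jacobian of model outputs with respect to parameters, and a rank-one label kernel. The function name and toy data are hypothetical; this is only a minimal numpy sketch of how tangent-feature/label alignment can be quantified, not the complexity measure proposed in the paper.

```python
import numpy as np

def tangent_kernel_alignment(J, y):
    """Kernel-target alignment between a tangent-feature kernel and labels.

    J : (n_samples, n_params) Jacobian of model outputs w.r.t. parameters
        (the tangent features); y : (n_samples,) targets.
    Returns a value in [0, 1]; higher means the kernel concentrates its
    energy along the task-relevant direction y y^T.
    """
    K = J @ J.T                       # empirical tangent kernel
    Y = np.outer(y, y)                # rank-one target kernel
    num = np.sum(K * Y)               # Frobenius inner product <K, Y>
    den = np.linalg.norm(K) * np.linalg.norm(Y)
    return num / den

# Toy check: tangent features roughly aligned with the labels
rng = np.random.default_rng(0)
y = np.sign(rng.standard_normal(100))
J = np.outer(y, rng.standard_normal(50)) + 0.1 * rng.standard_normal((100, 50))
print(tangent_kernel_alignment(J, y))   # close to 1 when features align with y
```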
BDD-based optimization for the quadratic stable set problem
Jaime E. González
André Augusto Cire
Louis-Martin Rousseau
Optimal Local and Remote Controllers With Unreliable Uplink Channels: An Elementary Proof
Mohammad Afshari
Recently, a model of a decentralized control system with local and remote controllers connected over unreliable channels was presented in [1]. The model has a nonclassical information structure that is not partially nested. Nonetheless, it is shown in [1] that the optimal control strategies are linear functions of the state estimate (which is a nonlinear function of the observations). Their proof is based on a fairly sophisticated dynamic programming argument. In this article, we present an alternative and elementary proof of the result which uses common information-based conditional independence and completion of squares.
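For readers unfamiliar with the completion-of-squares step mentioned in this abstract, the identity below shows the generic algebraic move for a cost that is quadratic in the control; the symbols (R positive definite, S, x, u) are illustrative and not the paper's notation.

```latex
% Completion of squares for a cost quadratic in the control u, with R \succ 0:
u^\top R\, u + 2\, u^\top S x
  = \bigl(u + R^{-1} S x\bigr)^\top R \bigl(u + R^{-1} S x\bigr)
    - x^\top S^\top R^{-1} S x .
% The minimizer is therefore the linear feedback u^* = -R^{-1} S x, and the
% remaining term -x^\top S^\top R^{-1} S x does not depend on u.
```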
Precision, Equity, and Public Health and Epidemiology Informatics – A Scoping Review
Renewal Monte Carlo: Renewal Theory-Based Reinforcement Learning
Jayakumar Subramanian
An online reinforcement learning algorithm called renewal Monte Carlo (RMC) is presented. RMC works for infinite horizon Markov decision processes with a designated start state. RMC is a Monte Carlo algorithm that retains the key advantages of Monte Carlo (simplicity, ease of implementation, and low bias) while circumventing its main drawbacks (high variance and delayed updates). Given a parameterized policy…
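To make the renewal idea concrete, the following minimal sketch (hypothetical function name, plain numpy) splits a single trajectory at returns to the designated start state and estimates performance as the ratio of expected per-cycle reward to expected cycle length, the standard renewal-reward estimator. It illustrates the underlying principle only, not the full RMC algorithm from the paper.

```python
import numpy as np

def renewal_ratio_estimate(trajectory, start_state):
    """Average-reward estimate from renewal cycles of a single trajectory.

    trajectory : list of (state, reward) pairs generated under a fixed policy,
    starting from `start_state`; each return to `start_state` closes a cycle.
    Returns E[cycle reward] / E[cycle length] over the completed cycles.
    """
    cycle_rewards, cycle_lengths = [], []
    r_sum, length = 0.0, 0
    for t, (state, reward) in enumerate(trajectory):
        if t > 0 and state == start_state:      # renewal: close the current cycle
            cycle_rewards.append(r_sum)
            cycle_lengths.append(length)
            r_sum, length = 0.0, 0
        r_sum += reward
        length += 1
    if not cycle_lengths:                       # never returned to the start state
        return None
    return np.mean(cycle_rewards) / np.mean(cycle_lengths)
```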
Neuronal activity remodels the F-actin based submembrane lattice in dendrites but not axons of hippocampal neurons
Flavie Lavoie-Cardinal
Anthony Bilodeau
Mado Lemieux
Marc-André Gardner
Theresa Wiesner
Gabrielle Laramée
Paul De Koninck
Survey on Applications of Multi-Armed and Contextual Bandits
Djallel Bouneffouf
Charu Aggarwal
In recent years, the multi-armed bandit (MAB) framework has attracted a lot of attention in various applications, from recommender systems and information retrieval to healthcare and finance. This success is due to its stellar performance combined with attractive properties, such as learning from less feedback. The multi-armed bandit field is currently experiencing a renaissance, as novel problem settings and algorithms motivated by various practical applications are being introduced, building on top of the classical bandit problem. This article aims to provide a comprehensive review of top recent developments in multiple real-life applications of the multi-armed bandit. Specifically, we introduce a taxonomy of common MAB-based applications and summarize the state of the art for each of those domains. Furthermore, we identify important current trends and provide new perspectives pertaining to the future of this burgeoning field.
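As a reminder of the classical setting these applications build on, here is a minimal UCB1 sketch for the stochastic multi-armed bandit; it is a generic illustration, not tied to any particular algorithm or application surveyed in the article.

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """UCB1 for the classical stochastic multi-armed bandit.

    pull(arm) -> reward in [0, 1]. Plays each arm once, then repeatedly picks
    the arm maximizing its empirical mean plus an exploration bonus.
    """
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(horizon):
        if t < n_arms:
            arm = t                                   # initialization: try every arm once
        else:
            bonus = np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(means + bonus))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return means, counts

# Example: Bernoulli arms with unknown success probabilities
rng = np.random.default_rng(1)
probs = [0.2, 0.5, 0.7]
means, counts = ucb1(lambda a: float(rng.random() < probs[a]), len(probs), 2000)
print(counts)   # most pulls should concentrate on the best arm
```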
Extendable and invertible manifold learning with geometry regularized autoencoders
Andres F. Duque Correa
Sacha Morin
Kevin R. Moon
A fundamental task in data exploration is to extract simplified low-dimensional representations that capture intrinsic geometry in data, especially for faithfully visualizing data in two or three dimensions. Common approaches to this task use kernel methods for manifold learning. However, these methods typically only provide an embedding of fixed input data and cannot extend to new data points. Autoencoders have also recently become popular for representation learning. But while they naturally compute feature extractors that are both extendable to new data and invertible (i.e., reconstructing original features from latent representations), they have limited capabilities to follow global intrinsic geometry compared to kernel-based manifold learning. We present a new method for integrating both approaches by incorporating a geometric regularization term in the bottleneck of the autoencoder. Our regularization, based on the diffusion potential distances from the recently proposed PHATE visualization method, encourages the learned latent representation to follow intrinsic data geometry, similar to manifold learning algorithms, while still enabling faithful extension to new data and reconstruction of data in the original feature space from latent coordinates. We compare our approach with leading kernel methods and autoencoder models for manifold learning to provide qualitative and quantitative evidence of our advantages in preserving intrinsic structure, out-of-sample extension, and reconstruction. Our method is easily implemented for big-data applications, whereas other methods are limited in this regard.
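A minimal PyTorch-style sketch of the general recipe described in this abstract: an autoencoder whose loss adds, on the bottleneck, a penalty matching pairwise latent distances to precomputed target distances (e.g., diffusion-potential distances). The layer sizes, the weight `lam`, and the class name are placeholders, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GeomRegAE(nn.Module):
    """Autoencoder with a geometric regularization term on the bottleneck.

    The regularizer pushes pairwise latent distances toward precomputed
    target distances (e.g., diffusion-potential distances, as in PHATE).
    Illustrative sketch only; dimensions are placeholders.
    """
    def __init__(self, d_in, d_latent=2, d_hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def loss_fn(x, z, x_hat, target_dist, lam=0.1):
    recon = nn.functional.mse_loss(x_hat, x)                 # reconstruction term
    latent_dist = torch.cdist(z, z)                          # pairwise latent distances
    geom = nn.functional.mse_loss(latent_dist, target_dist)  # geometric regularizer
    return recon + lam * geom
```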