Publications

Towards Lifelong Self-Supervision For Unpaired Image-to-Image Translation
Victor Schmidt
Makesh Narsimhan Sreedhar
Mostafa ElAraby
Unpaired Image-to-Image Translation (I2IT) tasks often suffer from a lack of data, a problem which self-supervised learning (SSL) has recently been very popular and successful at tackling. Leveraging auxiliary tasks such as rotation prediction or generative colorization, SSL can produce better and more robust representations in a low-data regime. Training such tasks alongside an I2IT task is, however, computationally intractable as model size and the number of tasks grow. On the other hand, learning sequentially could incur catastrophic forgetting of previously learned tasks. To alleviate this, we introduce Lifelong Self-Supervision (LiSS) as a way to pre-train an I2IT model (e.g., CycleGAN) on a set of self-supervised auxiliary tasks. By keeping an exponential moving average of past encoders and distilling the accumulated knowledge, we are able to maintain the network's validation performance on a number of tasks without any form of replay, parameter isolation or retraining techniques typically used in continual learning. We show that models trained with LiSS perform better on past tasks, while also being more robust than the CycleGAN baseline to color bias and entity entanglement (when two entities are very close).
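The exponential-moving-average distillation described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a PyTorch encoder, and the names `ema_update` and `distill_loss` are invented for the example.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.999):
    # Keep an exponential moving average of the student's weights in the teacher.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

def distill_loss(student, teacher, x):
    # Match the current encoder's features to the frozen EMA teacher's features.
    with torch.no_grad():
        target = teacher(x)
    return F.mse_loss(student(x), target)

# Toy encoder standing in for the translation network's encoder.
encoder = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU())
ema_encoder = copy.deepcopy(encoder)
for p in ema_encoder.parameters():
    p.requires_grad_(False)

x = torch.randn(4, 3, 64, 64)
loss = distill_loss(encoder, ema_encoder, x)  # added to the auxiliary-task losses
loss.backward()
ema_update(ema_encoder, encoder)
```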
Planning as Inference in Epidemiological Models
Andrew Warrington
Saeid Naderiparizi
Christian Dietrich Weilbach
Vaden Masrani
William Harvey
Adam Ścibior
Boyan Beronov
Seyed Ali Nasseri
In this work we demonstrate how existing software tools can be used to automate parts of infectious disease-control policy-making by performing inference in existing epidemiological dynamics models. The inference tasks undertaken include computing, for planning purposes, the posterior distribution over simulation-model parameters that are putatively controllable via direct policy-making choices and that give rise to acceptable disease-progression outcomes. Neither the full capabilities of such inference automation software tools nor their utility for planning is widely disseminated at the current time. Timely gains in understanding about these tools and how they can be used may lead to more fine-grained and less economically damaging policy prescriptions, particularly during the current COVID-19 pandemic.
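The idea of conditioning controllable simulator parameters on acceptable outcomes can be illustrated with a toy example. The sketch below is not the probabilistic-programming workflow the paper relies on: it uses a hand-written SIR simulator and plain rejection sampling, and the parameter `contact_reduction` and the 10% acceptance threshold are assumptions made for the example.

```python
import numpy as np

def sir_peak_infected(contact_reduction, beta=0.3, gamma=0.1, days=150,
                      i0=0.001, rng=None):
    # Toy discrete-time SIR model with mild process noise; returns the peak infected fraction.
    rng = rng or np.random.default_rng()
    s, i, peak = 1.0 - i0, i0, i0
    eff_beta = beta * (1.0 - contact_reduction) * rng.lognormal(0.0, 0.05)
    for _ in range(days):
        new_inf = eff_beta * s * i
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

# Uniform prior over the controllable policy parameter, conditioned (by rejection)
# on an "acceptable" outcome: peak infections below 10% of the population.
rng = np.random.default_rng(0)
accepted = [c for c in rng.uniform(0.0, 1.0, size=2_000)
            if sir_peak_infected(c, rng=rng) < 0.10]
print(f"posterior mean contact reduction: {np.mean(accepted):.2f}")
```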
Coping With Simulators That Don't Always Return
Andrew Warrington
Saeid Naderiparizi
Deterministic models are approximations of reality that are easy to interpret and often easier to build than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. We investigate and address computational inefficiencies that arise from adding process noise to deterministic simulators that fail to return for certain inputs; a property we describe as "brittle." We show how to train a conditional normalizing flow to propose perturbations such that the simulator succeeds with high probability, increasing computational efficiency.
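A rough sense of why a learned proposal helps with a brittle simulator can be conveyed with a toy example. The sketch below does not use a conditional normalizing flow as the paper does; it substitutes a Gaussian fitted to the perturbations that happened to succeed, and the simulator, its failure rule, and all names are assumptions made for illustration.

```python
import numpy as np

def brittle_simulator(state, perturbation):
    # Toy deterministic step that fails (returns None) when the perturbed state is too large.
    out = state + perturbation
    return None if abs(out) > 2.0 else out

rng = np.random.default_rng(0)
state = 1.5

# Naive process noise: many proposals push the state into the failure region.
naive = rng.normal(0.0, 1.0, size=5_000)
naive_success = np.mean([brittle_simulator(state, p) is not None for p in naive])

# "Train" a proposal on the perturbations that succeeded, as a crude stand-in
# for the conditional normalizing flow described in the paper.
ok = np.array([p for p in naive if brittle_simulator(state, p) is not None])
proposal = rng.normal(ok.mean(), ok.std(), size=5_000)
proposal_success = np.mean([brittle_simulator(state, p) is not None for p in proposal])

print(f"success rate: naive {naive_success:.2f}, learned proposal {proposal_success:.2f}")
# Importance weights N(p; 0, 1) / N(p; ok.mean(), ok.std()) would correct for the
# change of proposal when these samples are reused for inference.
```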
A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step sizes. We demonstrate its effectiveness by presenting simple and unified proofs of convergence for a variety of commonly-used methods. We show that value-based methods such as TD(λ) …
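The constant-step-size setting the abstract refers to can be visualised numerically. The sketch below runs tabular TD(0) (the λ = 0 special case) on a two-state chain that is not from the paper; with a fixed step size the value estimates do not converge to a point but fluctuate in a stationary distribution concentrated near the true values.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, alpha = 0.9, 0.1              # discount factor and constant step size
P = np.array([[0.9, 0.1],            # two-state Markov chain (illustrative)
              [0.2, 0.8]])
r = np.array([1.0, 0.0])             # reward received in each state

v_true = np.linalg.solve(np.eye(2) - gamma * P, r)

v, s, history = np.zeros(2), 0, []
for t in range(60_000):
    s_next = rng.choice(2, p=P[s])
    v[s] += alpha * (r[s] + gamma * v[s_next] - v[s])   # TD(0) update
    s = s_next
    if t >= 10_000:                                     # discard burn-in
        history.append(v.copy())

history = np.array(history)
print("true values:          ", np.round(v_true, 3))
print("stationary mean:      ", np.round(history.mean(axis=0), 3))
print("stationary std. dev.: ", np.round(history.std(axis=0), 3))
```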
Atypical brain asymmetry in autism – a candidate for clinically meaningful stratification
Dorothea L. Floris
Thomas Wolfers
Mariam Zabihi
Nathalie E. Holz
Christine Ecker
Flavio Dell’Acqua
Simon Baron-Cohen
Rosemary Holt
Sarah Durston
Eva Loth
Andre Marquand
Christian Beckmann
Jumana Ahmad
Sara Ambrosino
Bonnie Auyeung
Tobias Banaschewski
Sarah Baumeister
Sven Bölte
Thomas Bourgeron
Carsten Bours
Michael Brammer
Daniel Brandeis
Claudia Brogna
Yvette de Bruijn
Jan K. Buitelaar
Bhismadev Chakrabarti
Tony Charman
Ineke Cornelissen
Daisy Crawley
Jessica Faulkner
Vincent Frouin
Pilar Garcés
David Goyard
Lindsay Ham
Hannah Hayward
Joerg F. Hipp
Mark Johnson
Emily J. H. Jones
Prantik Kundu
Meng-Chuan Lai
Xavier Liogier D’ardhuy
Michael V. Lombardo
David J. Lythgoe
René Mandl
Luke Mason
Maarten Mennes
Andreas Meyer-Lindenberg
Carolin Moessnang
Nico Mueller
Declan Murphy
Beth Oakley
Laurence O’Dwyer
Marianne Oldehinkel
Bob Oranje
Gahan Pandina
Antonio Persico
Barbara Ruggeri
Amber N. V. Ruigrok
Jessica Sabet
Roberto Sacco
Antonia San José Cáceres
Emily Simonoff
Will Spooren
Julian Tillmann
Roberto Toro
Heike Tost
Jack Waldman
Steve C. R. Williams
Caroline Wooldridge
Marcel P. Zwiers
Overview of the TREC 2019 Fair Ranking Track
Asia J. Biega
Michael D. Ekstrand
Sebastian Kohlmeier
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers in addition to classic notions of relevance. As part of the benchmark, we defined standardized fairness metrics with evaluation protocols and released a dataset for the fair ranking problem. The 2019 task focused on reranking academic paper abstracts given a query. The objective was to fairly represent relevant authors from several groups that were unknown at the system submission time. Thus, the track emphasized the development of systems which have robust performance across a variety of group definitions. Participants were provided with query log data (queries, documents, and relevance) from Semantic Scholar. This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process, as well as a comparison of the performance of submitted systems.
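Group fairness in a ranked list is often reasoned about through position-discounted exposure. The sketch below is only an illustration of that idea, not the track's official metric or evaluation protocol; the logarithmic discount and the two-group labelling are assumptions made for the example.

```python
import numpy as np

def group_exposure(ranking_groups, num_groups):
    # Position-discounted exposure received by each author group in one ranked list.
    exposure = np.zeros(num_groups)
    for rank, g in enumerate(ranking_groups, start=1):
        exposure[g] += 1.0 / np.log2(rank + 1)
    return exposure / exposure.sum()

# Documents at ranks 1..5, each labelled with its authors' group (0 or 1).
print(group_exposure([0, 0, 1, 0, 1], num_groups=2))
```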
Multinational Investigation of Fracture Risk with Antidepressant Use by Class, Drug, and Indication
Robyn Tamblyn
David W. Bates
William G. Dixon
Nadyne Girard
Jennifer S. Haas
Bettina Habib
Usman Iqbal
Jack Li
Therese Sheppard
Antidepressants increase the risk of falls and fracture in older adults. However, risk estimates vary considerably even in comparable populations, limiting the usefulness of current evidence for clinical decision making. Our aim was to apply a common protocol to cohorts of older antidepressant users in multiple jurisdictions to estimate fracture risk associated with different antidepressant classes, drugs, doses, and potential treatment indications.
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Massimo Caccia
Pau Rodriguez
Oleksiy Ostapenko
Fabrice Normandin
Min Lin
Lucas Caccia
Issam Hadj Laradji
Alexandre Lacoste
David Vazquez
Improving Convolutional Neural Networks Via Conservative Field Regularisation and Integration
Sofiane Wozniak Achiche
Maxime Raison
Tensorized Random Projections
Beheshteh T. Rakhshan
We introduce a novel random projection technique for efficiently reducing the dimension of very high-dimensional tensors. Building upon classical results on Gaussian random projections and Johnson-Lindenstrauss transforms (JLT), we propose two tensorized random projection maps relying on the tensor train (TT) and CP decomposition formats, respectively. The two maps offer very low memory requirements and can be applied efficiently when the inputs are low-rank tensors given in the CP or TT format. Our theoretical analysis shows that the dense Gaussian matrix in JLT can be replaced by a low-rank tensor implicitly represented in compressed form with random factors, while still approximately preserving the Euclidean distance of the projected inputs. In addition, our results reveal that the TT format is substantially superior to CP in terms of the size of the random projection needed to achieve the same distortion ratio. Experiments on synthetic data validate our theoretical analysis and demonstrate the superiority of the TT decomposition.
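The replacement of the dense Gaussian matrix by a structured random tensor can be sketched for the CP variant. The snippet below is a simplified illustration, not the paper's construction: the factor distribution, the 1/sqrt(rank * k) normalisation, and all sizes are assumptions chosen so that squared norms are preserved in expectation, and the TT map the paper finds superior is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
shape, rank, k = (8, 9, 10), 5, 200   # input tensor shape, CP rank of each map row, output dim

# One Gaussian factor matrix per mode; row i of the map is the rank-`rank`
# CP tensor  sum_r A[i, r] ⊗ B[i, r] ⊗ C[i, r].
A, B, C = (rng.normal(size=(k, rank, d)) for d in shape)

def cp_random_projection(x):
    # Contract the CP rows against x directly, never forming the dense k x 720
    # matrix a classical JLT would use.  Scaling by 1/sqrt(rank * k) keeps
    # E[||y||^2] = ||x||_F^2 for Gaussian factors.
    return np.einsum('irj,irk,irl,jkl->i', A, B, C, x) / np.sqrt(rank * k)

x1, x2 = rng.normal(size=shape), rng.normal(size=shape)
print(f"original distance:  {np.linalg.norm(x1 - x2):.2f}")
print(f"projected distance: {np.linalg.norm(cp_random_projection(x1) - cp_random_projection(x2)):.2f}")
```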