Publications

Multi-language design smells: a backstage perspective
Mouna Abidi
Moses Openja
Md Saidur Rahman
Bayesian latent multi‐state modeling for nonequidistant longitudinal electronic health records
Yu Luo
David A. Stephens
Aman Verma
Both New and Chronic Potentially Inappropriate Medications Continued at Hospital Discharge Are Associated With Increased Risk of Adverse Events
Daniala L Weir
Todd C. Lee
Emily G. McDonald
Aude Motulsky
Michal Abrahamowicz
Steven Morgan
Robyn Tamblyn
Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
Alexandre Payeur
Jordan Guerguiev
Friedemann Zenke
Richard Naud
Towards Lifelong Self-Supervision For Unpaired Image-to-Image Translation
Victor Schmidt
Makesh Narsimhan Sreedhar
Mostafa ElAraby
Unpaired Image-to-Image Translation (I2IT) tasks often suffer from lack of data, a problem which self-supervised learning (SSL) has recently been very popular and successful at tackling. Leveraging auxiliary tasks such as rotation prediction or generative colorization, SSL can produce better and more robust representations in a low data regime. Training such tasks alongside an I2IT task is, however, computationally intractable as model size and the number of tasks grow. On the other hand, learning sequentially could incur catastrophic forgetting of previously learned tasks. To alleviate this, we introduce Lifelong Self-Supervision (LiSS) as a way to pre-train an I2IT model (e.g., CycleGAN) on a set of self-supervised auxiliary tasks. By keeping an exponential moving average of past encoders and distilling the accumulated knowledge, we are able to maintain the network's validation performance on a number of tasks without any form of replay, parameter isolation or retraining techniques typically used in continual learning. We show that models trained with LiSS perform better on past tasks, while also being more robust than the CycleGAN baseline to color bias and entity entanglement (when two entities are very close).
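A minimal sketch of the replay-free retention mechanism described in the abstract, assuming a PyTorch setup: a frozen exponential-moving-average copy of the encoder serves as a distillation target for the current encoder. The architecture, decay rate, and training loop below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: EMA snapshot of a past encoder used as a
# distillation target, as in the LiSS description above. Architecture and
# hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
ema_encoder = copy.deepcopy(encoder)      # snapshot accumulating past knowledge
for p in ema_encoder.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(encoder.parameters(), lr=2e-4)
decay = 0.99                               # assumed EMA decay

def update_ema():
    # ema <- decay * ema + (1 - decay) * current, after each optimizer step
    with torch.no_grad():
        for p_ema, p in zip(ema_encoder.parameters(), encoder.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

def distillation_loss(x):
    # Match current features to the EMA encoder's features: a replay-free way
    # to preserve what earlier auxiliary tasks taught the encoder.
    with torch.no_grad():
        target = ema_encoder(x)
    return F.mse_loss(encoder(x), target)

# One illustrative step on a random batch; in practice this term is added to
# the translation and auxiliary-task losses.
x = torch.randn(4, 3, 64, 64)
loss = distillation_loss(x)
optimizer.zero_grad()
loss.backward()
optimizer.step()
update_ema()
```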
Planning as Inference in Epidemiological Models
Andrew Warrington
Saeid Naderiparizi
Christian Dietrich Weilbach
Vaden Masrani
William Harvey
Adam Ścibior
Boyan Beronov
Seyed Ali Nasseri
In this work we demonstrate how existing software tools can be used to automate parts of infectious disease-control policy-making via performing inference in existing epidemiological dynamics models. The kind of inference tasks undertaken include computing, for planning purposes, the posterior distribution over putatively controllable, via direct policy-making choices, simulation model parameters that give rise to acceptable disease progression outcomes. Neither the full capabilities of such inference automation software tools nor their utility for planning is widely disseminated at the current time. Timely gains in understanding about these tools and how they can be used may lead to more fine-grained and less economically damaging policy prescriptions, particularly during the current COVID-19 pandemic.
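As a rough illustration of the planning-as-inference idea in the abstract (conditioning controllable simulation parameters on acceptable outcomes), the following sketch uses plain rejection sampling on a toy deterministic SIR model rather than the probabilistic-programming tooling the authors refer to; the model, prior, and acceptability threshold are all assumptions.

```python
# Toy sketch, not the paper's tooling: approximate the posterior over a
# controllable parameter (a contact-rate reduction) conditioned on an
# "acceptable" epidemic outcome, via rejection sampling on a simple SIR model.
import numpy as np

def sir_peak_infected(contact_reduction, beta=0.3, gamma=0.1,
                      n=1_000_000, i0=100, days=365):
    # Deterministic discrete-time SIR; returns the peak number infected.
    s, i, r = n - i0, float(i0), 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * (1.0 - contact_reduction) * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

rng = np.random.default_rng(0)
prior = rng.uniform(0.0, 1.0, size=2_000)   # prior over the policy-controlled knob
acceptable_peak = 50_000                     # stand-in for hospital capacity

# Retained samples approximate the posterior over interventions that keep the
# simulated outbreak within the acceptable range.
posterior = [c for c in prior if sir_peak_infected(c) < acceptable_peak]
print(f"accepted {len(posterior)} / {len(prior)} samples; "
      f"smallest acceptable contact reduction ≈ {min(posterior):.2f}")
```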
Coping With Simulators That Don't Always Return
Andrew Warrington
Saeid Naderiparizi
Deterministic models are approximations of reality that are easy to interpret and often easier to build than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. We investigate and address computational inefficiencies that arise from adding process noise to deterministic simulators that fail to return for certain inputs; a property we describe as "brittle." We show how to train a conditional normalizing flow to propose perturbations such that the simulator succeeds with high probability, increasing computational efficiency.
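To make the failure-aware proposal idea concrete, here is a heavily simplified sketch in which a learned conditional Gaussian stands in for the conditional normalizing flow mentioned in the abstract: it is fit by maximum likelihood to the process-noise perturbations that a toy brittle simulator accepts. The simulator, dimensions, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch, not the authors' implementation: a learned conditional
# Gaussian proposal stands in for their conditional normalizing flow. It is
# fit to the perturbations for which a toy brittle simulator returns.
import torch
import torch.nn as nn

def brittle_simulator(state, noise):
    # Toy deterministic step that "fails" (returns None) for large perturbations.
    if noise.abs().max() > 1.0:
        return None
    return state + noise

class ConditionalProposal(nn.Module):
    # Predicts mean and log-std of the perturbation given the current state.
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, 2 * dim))

    def forward(self, state):
        mu, log_std = self.net(state).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.exp())

proposal = ConditionalProposal()
optimizer = torch.optim.Adam(proposal.parameters(), lr=1e-3)
base_noise = torch.distributions.Normal(0.0, 1.5)   # original, overly wide process noise

for step in range(200):
    state = torch.randn(64, 2)
    noise = base_noise.sample(state.shape)
    # Keep only perturbations the simulator accepted and fit the proposal to them.
    ok = torch.tensor([brittle_simulator(s, n) is not None
                       for s, n in zip(state, noise)])
    if ok.sum() == 0:
        continue
    dist = proposal(state[ok])
    loss = -dist.log_prob(noise[ok]).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference time, sampling perturbations from `proposal(state)` succeeds far
# more often than the original noise; importance weights correct for the change
# of proposal distribution.
```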
A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes. We demonstrate its effectiveness by presenting simple and unified proofs of convergence for a variety of commonly-used methods. We show that value-based methods such as TD(
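A small hedged sketch of the phenomenon the abstract studies, assuming a toy two-state Markov reward process: tabular TD(0) with a constant step-size keeps fluctuating instead of converging to a point, which is why the distribution of the iterates is the natural object of analysis. All numbers below are illustrative assumptions, not from the paper.

```python
# Illustrative sketch: constant step-size TD(0) iterates settle into a
# stationary distribution rather than a single point estimate.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])        # assumed transition matrix
r = np.array([0.0, 1.0])          # assumed reward on leaving each state
gamma, alpha = 0.9, 0.1           # discount and constant step-size

V = np.zeros(2)
s = 0
history = []
for t in range(50_000):
    s_next = rng.choice(2, p=P[s])
    # Constant step-size TD(0) update.
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])
    s = s_next
    if t > 25_000:                # record iterates after burn-in
        history.append(V.copy())

history = np.array(history)
print("mean of iterates:", history.mean(axis=0))
print("std of iterates: ", history.std(axis=0))   # nonzero: a distribution, not a point
```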
Atypical brain asymmetry in autism – a candidate for clinically meaningful stratification
Dorothea L. Floris
Thomas Wolfers
Mariam Zabihi
Nathalie E. Holz
Christine Ecker
Flavio Dell’Acqua
Simon Baron-Cohen
Rosemary Holt
Sarah Durston
Eva Loth
Andre Marquand
Christian Beckmann
Jumana Ahmad
Sara Ambrosino
Bonnie Auyeung
Tobias Banaschewski
Sarah Baumeister
Sven Bölte
Thomas Bourgeron
Carsten Bours
Michael Brammer
Daniel Brandeis
Claudia Brogna
Yvette de Bruijn
Jan K. Buitelaar
Bhismadev Chakrabarti
Tony Charman
Ineke Cornelissen
Daisy Crawley
Jessica Faulkner
Vincent Frouin
Pilar Garcés
David Goyard
Lindsay Ham
Hannah Hayward
Joerg F. Hipp
Mark Johnson
Emily J. H. Jones
Prantik Kundu
Meng-Chuan Lai
Xavier Liogier D’ardhuy
Michael V. Lombardo
David J. Lythgoe
René Mandl
Luke Mason
Maarten Mennes
Andreas Meyer-Lindenberg
Carolin Moessnang
Nico Mueller
Declan Murphy
Beth Oakley
Laurence O’Dwyer
Marianne Oldehinkel
Bob Oranje
Gahan Pandina
Antonio Persico
Barbara Ruggeri
Amber N. V. Ruigrok
Jessica Sabet
Roberto Sacco
Antonia San José Cáceres
Emily Simonoff
Will Spooren
Julian Tillmann
Roberto Toro
Heike Tost
Jack Waldman
Steve C. R. Williams
Caroline Wooldridge
Marcel P. Zwiers
Overview of the TREC 2019 Fair Ranking Track
Asia J. Biega
Michael D. Ekstrand
Sebastian Kohlmeier