Publications

Clustering units in neural networks: upstream vs downstream information
Richard D Lange
Konrad Paul Kording
It has been hypothesized that some form of "modular" structure in artificial neural networks should be useful for learning, compositionality, and generalization. However, defining and quantifying modularity remains an open problem. We cast the problem of detecting functional modules into the problem of detecting clusters of similar-functioning units. This raises the question of what makes two units functionally similar. For this, we consider two broad families of methods: those that define similarity based on how units respond to structured variations in inputs ("upstream"), and those based on how variations in hidden unit activations affect outputs ("downstream"). We conduct an empirical study quantifying modularity of hidden layer representations of simple feedforward, fully connected networks, across a range of hyperparameters. For each model, we quantify pairwise associations between hidden units in each layer using a variety of both upstream and downstream measures, then cluster them by maximizing their "modularity score" using established tools from network science. We find two surprising results: first, dropout dramatically increased modularity, while other forms of weight regularization had more modest effects. Second, although we observe that there is usually good agreement about clusters within both upstream methods and downstream methods, there is little agreement about the cluster assignments across these two families of methods. This has important implications for representation learning, as it suggests that finding modular representations that reflect structure in inputs (e.g. disentanglement) may be a distinct goal from learning modular representations that reflect structure in outputs (e.g. compositionality).
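As a rough illustration of this pipeline (not the authors' code: the correlation-based "upstream" measure, the greedy modularity heuristic, and all names below are illustrative assumptions), one might compute pairwise associations between hidden units from their activations and then partition them with a standard modularity maximizer from network science:

```python
# Illustrative sketch: cluster hidden units by an assumed "upstream" similarity
# measure (correlation of unit activations over inputs), then maximize modularity
# with a standard network-science tool.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_hidden_units(activations: np.ndarray):
    """activations: (n_inputs, n_units) hidden-layer responses to a batch of inputs."""
    # Pairwise association between units: absolute Pearson correlation.
    assoc = np.abs(np.corrcoef(activations.T))
    np.fill_diagonal(assoc, 0.0)             # no self-edges
    graph = nx.from_numpy_array(assoc)       # weighted similarity graph over units
    # Greedy modularity maximization returns a partition of units into clusters.
    communities = greedy_modularity_communities(graph, weight="weight")
    return [sorted(c) for c in communities]

# Example: 256 random inputs through a toy 32-unit layer (stand-in for real activations).
rng = np.random.default_rng(0)
acts = rng.standard_normal((256, 32))
print(cluster_hidden_units(acts))
```

A "downstream" variant of this sketch would swap the activation correlations for a measure of how similarly perturbations of each pair of units affect the network's outputs.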
Studying the Practices of Deploying Machine Learning Projects on Docker
Moses Openja
Forough Majidi
Bhagya Chembakottu
Heng Li
A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games
Samuel Sokota
Ryan D’orazio
J. Z. Kolter
Nicolas Loizou
Marc Lanctot
Noam Brown
Christian Kroer
This work studies an algorithm, which we call magnetic mirror descent, that is inspired by mirror descent and the non-Euclidean proximal gradient algorithm. Our contribution is demonstrating the virtues of magnetic mirror descent both as an equilibrium solver and as an approach to reinforcement learning in two-player zero-sum games. These virtues include: 1) being the first quantal response equilibria solver to achieve linear convergence for extensive-form games with first-order feedback; 2) being the first standard reinforcement learning algorithm to achieve empirically competitive results with CFR in tabular settings; 3) achieving favorable performance in 3x3 Dark Hex and Phantom Tic-Tac-Toe as a self-play deep reinforcement learning algorithm.
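To convey the core idea only, here is a minimal sketch of a "magnetic" mirror-descent step on the probability simplex, assuming a negative-entropy mirror map and KL regularization toward a fixed "magnet" policy; the step size, regularization strength, and exact update used in the paper may differ:

```python
# Minimal sketch, under assumptions: one closed-form step of mirror descent with
# an extra KL "magnet" term of strength alpha pulling toward a reference policy rho.
import numpy as np

def magnetic_mirror_descent_step(pi, q, rho, eta=0.1, alpha=0.05):
    """Solves argmax_p <q, p> - alpha*KL(p, rho) - (1/eta)*KL(p, pi) on the simplex."""
    logits = (np.log(pi) + eta * q + alpha * eta * np.log(rho)) / (1.0 + alpha * eta)
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Usage: a 3-action policy drifts toward the high-payoff action while the
# uniform magnet rho keeps it from collapsing to a pure strategy.
pi = np.full(3, 1.0 / 3)
rho = np.full(3, 1.0 / 3)
q = np.array([1.0, 0.0, -1.0])                 # per-action payoffs / Q-values
for _ in range(50):
    pi = magnetic_mirror_descent_step(pi, q, rho)
print(pi)
```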
Leveraging Integer Linear Programming to Learn Optimal Fair Rule Lists
Julien Ferry
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
On Neural Architecture Inductive Biases for Relational Tasks
Current deep learning approaches have shown good in-distribution generalization performance, but struggle with out-of-distribution generalization. This is especially true in the case of tasks involving abstract relations like recognizing rules in sequences, as we find in many intelligence tests. Recent work has explored how forcing relational representations to remain distinct from sensory representations, as appears to be the case in the brain, can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by 'partitioned' representations of relations and sensory details, and how this inductive bias can help recompose learned relational structure in newly encountered settings. We introduce a simple architecture based on similarity scores which we name the Compositional Relational Network (CoRelNet). Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations.
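A minimal sketch of the similarity-score idea follows (layer sizes and names are assumptions, not the paper's exact specification): each object is encoded, only the pairwise similarity matrix of the encodings is retained, and a small readout classifies from that matrix, so sensory details never reach the output head.

```python
# Illustrative sketch of a similarity-score relational architecture
# (assumed layer sizes, not the paper's exact CoRelNet specification).
import torch
import torch.nn as nn

class CoRelNetSketch(nn.Module):
    def __init__(self, in_dim, embed_dim, n_objects, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU(),
                                     nn.Linear(embed_dim, embed_dim))
        self.head = nn.Sequential(nn.Linear(n_objects * n_objects, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, objects):               # objects: (batch, n_objects, in_dim)
        z = self.encoder(objects)             # per-object embeddings
        sim = torch.softmax(z @ z.transpose(1, 2), dim=-1)  # keep similarity scores only
        return self.head(sim.flatten(1))      # classify from relations alone

model = CoRelNetSketch(in_dim=16, embed_dim=32, n_objects=5, n_classes=2)
print(model(torch.randn(8, 5, 16)).shape)     # torch.Size([8, 2])
```

The partitioning is enforced structurally here: the readout sees only the relation matrix, never the object embeddings themselves.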
Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems
Devang Kulshreshtha
Muhammad Shayan
Robert Belfer
Iulian V. Serban
Ekaterina Kochmar
Sequential Density Estimation via Nonlinear Continuous Weighted Finite Automata (NCWFAs)
Tianyu Li
Bogdan Mazoure
Weighted finite automata (WFAs) have been widely applied in many fields. One of the classic problems for WFAs is probability distribution estimation over sequences of discrete symbols. Although WFAs have been extended to deal with continuous input data, namely continuous WFAs (CWFAs), it is still unclear how to approximate density functions over sequences of continuous random variables using WFA-based models, due to the limited expressiveness of the model as well as the tractability of approximating density functions via CWFAs. In this paper, we first propose a nonlinear extension to the CWFA model to improve its expressiveness, which we refer to as nonlinear continuous WFAs (NCWFAs). Then we leverage the so-called RNADE method, a well-known neural-network-based density estimator, and propose the RNADE-NCWFA model. The RNADE-NCWFA model computes a density function by design. We show that this model is strictly more expressive than the Gaussian HMM model, which CWFAs cannot approximate. Empirically, we conduct a synthetic experiment using data generated by a Gaussian HMM. We focus on evaluating the model's ability to estimate densities for sequences of varying lengths (longer than those seen in training). We observe that our model performs the best among the compared baseline methods.
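The following is a loose sketch of the idea, under assumed shapes and parameterization rather than the paper's exact construction: a nonlinear recurrent state stands in for the automaton state, and an RNADE-style mixture-of-Gaussians head converts each state into a conditional density, so the sequence density factorizes into tractable per-step terms.

```python
# Illustrative sketch (assumed parameterization, not the paper's exact NCWFA):
# the sequence density is the product over t of p(x_t | x_{<t}), where each
# conditional is a mixture of Gaussians read off a nonlinearly updated state.
import torch
import torch.nn as nn

class RnadeNcwfaSketch(nn.Module):
    def __init__(self, state_dim=16, n_components=4):
        super().__init__()
        self.h0 = nn.Parameter(torch.zeros(state_dim))
        self.transition = nn.Sequential(nn.Linear(state_dim + 1, state_dim), nn.Tanh())
        self.mix = nn.Linear(state_dim, n_components)      # mixture logits
        self.mean = nn.Linear(state_dim, n_components)     # component means
        self.log_std = nn.Linear(state_dim, n_components)  # component log-stds

    def log_prob(self, x):                      # x: (batch, seq_len) real-valued sequences
        h = self.h0.expand(x.shape[0], -1)
        total = 0.0
        for t in range(x.shape[1]):
            logw = torch.log_softmax(self.mix(h), dim=-1)
            dist = torch.distributions.Normal(self.mean(h), self.log_std(h).exp())
            comp = dist.log_prob(x[:, t:t + 1])             # (batch, n_components)
            total = total + torch.logsumexp(logw + comp, dim=-1)
            h = self.transition(torch.cat([h, x[:, t:t + 1]], dim=-1))
        return total                            # per-sequence log-density

model = RnadeNcwfaSketch()
print(model.log_prob(torch.randn(8, 12)).shape)  # torch.Size([8])
```

Because the density is built step by step from the recurrent state, the same model handles sequences longer than those seen during training, which is the evaluation setting described above.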
Interacting brains revisited: A cross‐brain network neuroscience perspective
Christian Gerloff
Kerstin Konrad
Christina Büsing
Vanessa Reindl
Technologically-assisted communication attenuates inter-brain synchrony
Linoy Schwartz
Jonathan Levy
Yaara Endevelt-Shapira
Amir Djalovski
Olga Hayut
Ruth Pinkenson Feldman
How Can Digital Mental Health Enhance Psychiatry?
Emilie Stern
Jean-Arthur MICOULAUD FRANCHI
Jeverson Moreira
Stephane Mouchabac
Julia Maruani
Pierre Philip
Michel Lejoyeux
Pierre A. GEOFFROY
The use of digital technologies is constantly growing around the world. The widespread adoption of digital technologies and solutions in daily clinical practice in psychiatry seems to be a question of when, not if. We propose a synthesis of the scientific literature on digital technologies in psychiatry and discuss the main aspects of their possible uses and interests in psychiatry according to three domains of influence: 1) assisting and improving current care: digital psychiatry allows more people to have access to care, not only by being more accessible but also by being less stigmatized and more convenient; 2) developing new treatments: digital psychiatry allows new treatments to be delivered via apps, and practical guidelines can reduce ethical challenges and increase the efficacy of digital tools; and 3) producing scientific and medical knowledge: digital technologies offer larger and more objective data collection, allowing for better detection and prevention of symptoms. Finally, ethical and efficacy issues remain, and some guidelines have been put forth on how to safely use these solutions and prepare for the future.
Modeling electronic health record data using a knowledge-graph-embedded topic model
Yuesong Zou
Ahmad Pesaranghader
Aman Verma
The rapid growth of electronic health record (EHR) datasets opens up promising opportunities to understand human diseases in a systematic way. However, effective extraction of clinical knowledge from EHR data has been hindered by its sparsity and noisy information. We present KG-ETM, an end-to-end knowledge-graph-based multimodal embedded topic model. KG-ETM distills latent disease topics from EHR data by learning the embeddings from medical knowledge graphs. We applied KG-ETM to a large-scale EHR dataset consisting of over 1 million patients. We evaluated its performance based on EHR reconstruction and drug imputation. KG-ETM demonstrated superior performance over the alternative methods on both tasks. Moreover, our model learned clinically meaningful graph-informed embeddings of the EHR codes. In addition, our model is able to discover interpretable and accurate patient representations for patient stratification and drug recommendation.
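For intuition, here is a minimal sketch of an embedded-topic-model decoder in the spirit of KG-ETM, with assumed shapes and randomly drawn embeddings standing in for the knowledge-graph-informed code embeddings the paper actually learns:

```python
# Minimal sketch of an embedded-topic-model decoder (assumed shapes and names):
# topic-code probabilities come from inner products of topic embeddings with
# EHR-code embeddings; in KG-ETM the code embeddings would be learned jointly
# with the medical knowledge graph rather than drawn at random as here.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_codes, n_topics, embed_dim = 1000, 20, 64
rho = rng.standard_normal((n_codes, embed_dim))     # EHR-code embeddings (KG-informed in KG-ETM)
alpha = rng.standard_normal((n_topics, embed_dim))  # topic embeddings
beta = softmax(alpha @ rho.T, axis=-1)              # (n_topics, n_codes) topic-code distributions

theta = softmax(rng.standard_normal(n_topics))      # one patient's topic mixture
p_codes = theta @ beta                              # per-code probabilities used for
print(p_codes.shape, p_codes.sum())                 # EHR reconstruction: (1000,), ~1.0
```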
Genetic correlates of phenotypic heterogeneity in autism
Varun Warrier
Xinhe Zhang
Patrick Reed
Alexandra Havdahl
Tyler M. Moore
Freddy Cliquet
Claire Leblond
Thomas Rolland
Anders Rosengren
Antonia San José Cáceres
Hannah Hayward
Daisy Crawley
Jessica Faulkner
Jessica Sabet
Claire Ellis
Beth Oakley
Eva Loth
Tony Charman
Declan Murphy
Rosemary Holt
Jack Waldman
Jessica Upadhyay
Nicola Gunby
Meng-Chuan Lai
Gwilym Renouf
Amber N. V. Ruigrok
Emily Taylor
Hisham Ziauddeen
Julia Deakin
Sara Ambrosino di Bruttopilo
Sarai van Dijk
Yvonne Rijks
Tabitha Koops
Miriam Douma
Alyssia Spaan
Iris Selten
Maarten Steffers
Anna Ver Loren van Themaat
Nico Bast
Sarah Baumeister
Larry O’Dwyer
Carsten Bours
Annika Rausch
Daniel von Rhein
Ineke Cornelissen
Yvette de Bruin
Maartje Graauwmans
Elzbieta Kostrzewa
Elodie Cauvet
Kristiina Tammimies
Rouslan Sitnikow
Yang-Min Kim
Thomas Bourgeron
David M. Hougaard
Jonas Bybjerg-Grauholm
Thomas Werge
Preben Bo Mortensen
Ole Mors
Merete Nordentoft
Dwaipayan Adhya
Armandina Alamanza
Carrie Allison
Isabelle Garvey
Tracey Parsons
Paula Smith
Alex Tsompanidis
Graham J. Burton
Alexander E. P. Heazell
Lidia V. Gabis
Tal Biron-Shental
Madeline A. Lancaster
Deepak P. Srivastava
Jonathan Mill
David H. Rowitch
Matthew E. Hurles
Daniel H. Geschwind
Anders D. Børglum
Elise B. Robinson
Jakob Grove
Hilary C. Martin
Simon Baron-Cohen