Publications

Assessment of Extubation Readiness Using Spontaneous Breathing Trials in Extremely Preterm Neonates
Wissam Shalish
Lara Kanbar
Lajos Kovacs
Sanjay Chawla
Martin Keszler
Smita Rao
Samantha Latremouille
Karen Brown
Robert E. Kearney
Guilherme M. Sant’Anna
Importance: Spontaneous breathing trials (SBTs) are used to determine extubation readiness in extremely preterm neonates (gestational age ≤28 weeks), but these trials rely on empirical combinations of clinical events during endotracheal continuous positive airway pressure (ET-CPAP).
Objectives: To describe clinical events during ET-CPAP and to assess the accuracy of comprehensive clinical event combinations in predicting successful extubation compared with clinical judgment alone.
Design, Setting, and Participants: This multicenter diagnostic study used data from 259 neonates seen at 5 neonatal intensive care units in the prospective Automated Prediction of Extubation Readiness (APEX) study from September 1, 2013, through August 31, 2018. Neonates with birth weight less than 1250 g who required mechanical ventilation were eligible. Neonates deemed ready for extubation who underwent ET-CPAP before extubation were included.
Interventions: In the APEX study, cardiorespiratory signals were recorded during 5-minute ET-CPAP, and signs of clinical instability were monitored.
Main Outcomes and Measures: Four clinical events were documented during ET-CPAP: apnea requiring stimulation, presence and cumulative durations of bradycardia and desaturation, and increased supplemental oxygen. Clinical event occurrence was assessed and compared between extubation pass and fail (defined as reintubation within 7 days). An automated algorithm was developed to generate SBT definitions from all clinical event combinations and to compute the diagnostic accuracy of each SBT in predicting extubation success.
Results: Of 259 neonates (139 [54%] male) with a median gestational age of 26.1 weeks (interquartile range [IQR], 24.9-27.4 weeks) and median birth weight of 830 g (IQR, 690-1019 g), 147 (57%) had at least 1 clinical event during ET-CPAP. Apneas occurred in 10% (26 of 259) of neonates, bradycardias in 19% (48), desaturations in 53% (138), and increased oxygen needs in 41% (107). Neonates with successful extubation (71% [184 of 259]) had significantly fewer clinical events (51% [93 of 184] vs 72% [54 of 75], P = .002), shorter cumulative bradycardia duration (median, 0 seconds [IQR, 0 seconds] vs 0 seconds [IQR, 0-9 seconds], P < .001), shorter cumulative desaturation duration (median, 0 seconds [IQR, 0-59 seconds] vs 25 seconds [IQR, 0-90 seconds], P = .003),
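The exhaustive search over SBT definitions described in the Results could be sketched roughly as follows. This is a toy illustration with hypothetical records, not the APEX algorithm or its data:

```python
from itertools import combinations

# Hypothetical per-neonate records: which clinical events occurred during
# ET-CPAP, and whether extubation ultimately succeeded (illustrative only).
EVENTS = ["apnea", "bradycardia", "desaturation", "increased_oxygen"]
records = [
    {"events": {"desaturation"}, "extubation_success": True},
    {"events": set(), "extubation_success": True},
    {"events": {"apnea", "bradycardia"}, "extubation_success": False},
    {"events": {"desaturation", "increased_oxygen"}, "extubation_success": False},
]

def sbt_accuracy(fail_criteria, records):
    """An SBT 'fails' if any event in fail_criteria occurred.
    Returns (sensitivity, specificity) for predicting extubation success."""
    tp = fn = tn = fp = 0
    for r in records:
        predicted_pass = not (r["events"] & fail_criteria)
        if r["extubation_success"]:
            tp += predicted_pass
            fn += not predicted_pass
        else:
            tn += not predicted_pass
            fp += predicted_pass
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Exhaustively score every non-empty combination of failure criteria.
results = {}
for k in range(1, len(EVENTS) + 1):
    for combo in combinations(EVENTS, k):
        results[combo] = sbt_accuracy(set(combo), records)
```

Each candidate SBT definition is then a point on a sensitivity/specificity trade-off curve that can be compared against clinical judgment alone.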
CLOSURE: Assessing Systematic Generalization of CLEVR Models
Timothy J. O'Donnell
Shikhar Murty
Philippe Beaudoin
The CLEVR dataset of natural-looking questions about 3D-rendered scenes has recently received much attention from the research community. A number of models have been proposed for this task, many of which achieved very high accuracies of around 97-99%. In this work, we study how systematic the generalization of such models is, that is, to what extent they are capable of handling novel combinations of known linguistic constructs. To this end, we test models' understanding of referring expressions based on matching object properties (e.g., "the object that is the same size as the red ball") in novel contexts. Our experiments on the resulting CLOSURE benchmark show that state-of-the-art models often do not exhibit systematicity after being trained on CLEVR. Surprisingly, we find that an explicitly compositional Neural Module Network (NMN) model also generalizes badly on CLOSURE, even when it has access to the ground-truth programs at test time. We improve the NMN's systematic generalization by developing a novel Vector-NMN module architecture with vector-valued inputs and outputs. Lastly, we investigate the extent to which few-shot transfer learning can help models that are pretrained on CLEVR adapt to CLOSURE. Our few-shot learning experiments contrast the adaptation behavior of models with intermediate discrete programs against that of end-to-end continuous models.
Shaping representations through communication: community size effect in artificial learning systems
Olivier Tieleman
Angeliki Lazaridou
Shibl Mourad
Charles Blundell
Motivated by theories of language and communication that explain why communities with large numbers of speakers have, on average, simpler languages with more regularity, we cast the representation learning problem in terms of learning to communicate. Our starting point views the traditional autoencoder setup as a single encoder with a fixed decoder partner that must learn to communicate. Generalizing from there, we introduce community-based autoencoders, in which multiple encoders and decoders collectively learn representations by being randomly paired up on successive training iterations. We find that increasing the community size reduces idiosyncrasies in the learned codes, resulting in representations that better encode concept categories and correlate with human feature norms.
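The random encoder-decoder pairing at the heart of community-based autoencoders can be sketched as a toy training loop. The `Linear` stand-in networks and their scalar parameters are illustrative assumptions, not the paper's architecture:

```python
import random

# Minimal sketch: on each training iteration, one encoder and one decoder
# are sampled from the community and trained together on a reconstruction
# loss. Real models would be neural networks updated by backpropagation.
class Linear:
    def __init__(self, scale):
        self.scale = scale
    def __call__(self, x):
        return [v * self.scale for v in x]

def train_step(encoders, decoders, batch):
    enc = random.choice(encoders)   # random community member as encoder
    dec = random.choice(decoders)   # random community member as decoder
    code = enc(batch)
    recon = dec(code)
    # Mean squared reconstruction error (the communication loss).
    return sum((r - b) ** 2 for r, b in zip(recon, batch)) / len(batch)

encoders = [Linear(0.5), Linear(1.0)]
decoders = [Linear(2.0), Linear(1.0)]
random.seed(0)
losses = [train_step(encoders, decoders, [1.0, 2.0, 3.0]) for _ in range(100)]
```

Because any encoder may be paired with any decoder, codes that rely on pair-specific idiosyncrasies incur high loss, which is the pressure toward more regular representations.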
Entropy Regularization with Discounted Future State Distribution in Policy Gradient Methods
The policy gradient theorem is defined based on an objective with respect to the initial distribution over states. In the discounted case, this results in policies that are optimal for one distribution over initial states, but may not be uniformly optimal for others, no matter where the agent starts from. Furthermore, to obtain unbiased gradient estimates, the starting point of the policy gradient estimator requires sampling states from a normalized discounted weighting of states. However, the difficulty of estimating the normalized discounted weighting of states, or the stationary state distribution, is well known. Additionally, the large sample complexity of policy gradient methods is often attributed to insufficient exploration, and to remedy this, it is often assumed that the restart distribution provides sufficient exploration in these algorithms. In this work, we propose exploration in policy gradient methods based on maximizing the entropy of the discounted future state distribution. The key contribution of our work is a practically feasible algorithm to estimate the normalized discounted weighting of states, i.e., the \textit{discounted future state distribution}. We propose that exploration can be achieved by entropy regularization with the discounted state distribution in policy gradients, where a metric for maximal coverage of the state space can be based on the entropy of the induced state distribution. The proposed approach can be considered a three time-scale algorithm, and under some mild technical conditions, we prove its convergence to a locally optimal policy. Experimentally, we demonstrate the usefulness of regularization with the discounted future state distribution in terms of increased state space coverage and faster learning on a range of complex tasks.
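The quantity being regularized, the normalized discounted weighting of states, can be computed exactly for a small Markov chain; the paper's contribution is estimating it in settings where this exact computation is infeasible. The transition matrix, initial distribution, and discount below are illustrative assumptions:

```python
import math

# d(s) proportional to sum_t gamma^t P(s_t = s), computed exactly for a
# 3-state Markov chain under a fixed policy, then normalized.
P = [[0.9, 0.1, 0.0],    # P[i][j]: prob of moving from state i to state j
     [0.0, 0.9, 0.1],
     [0.1, 0.0, 0.9]]
mu0 = [1.0, 0.0, 0.0]    # initial state distribution
gamma = 0.9

def discounted_state_distribution(P, mu0, gamma, horizon=500):
    d = [0.0] * len(mu0)
    p_t = list(mu0)
    for t in range(horizon):
        for s in range(len(d)):
            d[s] += (gamma ** t) * p_t[s]
        # One step of the chain: p_{t+1} = p_t @ P
        p_t = [sum(p_t[i] * P[i][j] for i in range(len(d)))
               for j in range(len(d))]
    z = sum(d)            # normalization (approx. 1 / (1 - gamma))
    return [v / z for v in d]

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

d = discounted_state_distribution(P, mu0, gamma)
H = entropy(d)            # the coverage measure the regularizer maximizes
```

A uniform distribution over the 3 states would maximize this entropy at log 3, so the regularizer pushes the policy toward visiting all states rather than concentrating near the start state.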
Marginalized State Distribution Entropy Regularization in Policy Optimization
Doubly Robust Off-Policy Actor-Critic Algorithms for Reinforcement Learning
We study the problem of off-policy critic evaluation in several variants of value-based off-policy actor-critic algorithms. Off-policy actor-critic algorithms require an off-policy critic evaluation step to estimate the value of the new policy after every policy gradient update. Despite the enormous success of off-policy policy gradients on control tasks, existing general methods suffer from high variance and instability, partly because the policy improvement depends on the gradient of the estimated value function. In this work, we present a new way of performing off-policy policy evaluation in actor-critic methods, based on doubly robust estimators. We extend the doubly robust estimator from off-policy policy evaluation (OPE) to actor-critic algorithms that include a reward estimator as a performance model. We find that doubly robust estimation of the critic can significantly improve performance in continuous control tasks. Furthermore, in cases where a stochastic reward function can lead to high variance, doubly robust critic estimation can improve performance under corrupted, stochastic reward signals, indicating its usefulness for robust and safe reinforcement learning.
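As a rough sketch, the trajectory-wise doubly robust estimator (shown here in its standard OPE form, not necessarily the paper's exact actor-critic variant) combines a learned value model with an importance-weighted correction; all names and numbers below are placeholders:

```python
# Doubly robust (DR) off-policy value estimate, applied backward in time:
#   V_DR(t) = V_hat(s_t) + rho_t * (r_t + gamma * V_DR(t+1) - Q_hat(s_t, a_t))
# If the models V_hat/Q_hat are accurate, the correction term has low
# variance; if the importance ratios rho are accurate, the estimate stays
# unbiased even when the models are wrong (hence "doubly robust").
def dr_estimate(trajectory, v_hat, q_hat, gamma):
    """trajectory: list of (state, action, reward, importance_ratio) in
    time order. v_hat / q_hat: learned value models as dicts."""
    v_dr = 0.0
    for s, a, r, rho in reversed(trajectory):
        v_dr = v_hat.get(s, 0.0) + rho * (r + gamma * v_dr - q_hat.get((s, a), 0.0))
    return v_dr

# With zero models, DR degenerates to the importance-weighted return.
traj = [("s0", "a0", 1.0, 1.0), ("s1", "a0", 1.0, 1.0)]
estimate = dr_estimate(traj, v_hat={}, q_hat={}, gamma=0.5)
```

With zero importance ratios it instead returns the model's own prediction for the initial state, illustrating the two extremes the estimator interpolates between.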
Interactive Psychometrics for Autism with the Human Dynamic Clamp: Interpersonal Synchrony from Sensory-motor to Socio-cognitive Domains
Florence Baillin
Aline Lefebvre
Amandine Pedoux
Yann Beauxis
Denis-Alexander Engemann
Anna Maruani
Frederique Amsellem
Thomas Bourgeron
Richard Delorme
Mutations associated with neuropsychiatric conditions delineate functional brain connectivity dimensions contributing to autism and schizophrenia
Clara A. Moreau
Sebastian G. W. Urchs
Pierre Orban
Catherine Schramm
Aurélie Labbe
Elise Douard
Pierre-Olivier Quirion
Amy Lin
Leila Kushan
Stephanie Grot
David Luck
Adrianna Mendrek
Stephane Potvin
Emmanuel Stip
Thomas Bourgeron
Alan C. Evans
Carrie E. Bearden
Pierre Bellec (1 more author)
Sébastien Jacquemont
16p11.2 and 22q11.2 Copy Number Variants (CNVs) confer high risk for Autism Spectrum Disorder (ASD), schizophrenia (SZ), and Attention-Deficit/Hyperactivity Disorder (ADHD), but their impact on functional connectivity (FC) remains unclear. Here we report an analysis of resting-state FC using magnetic resonance imaging data from 101 CNV carriers, 755 individuals with idiopathic ASD, SZ, or ADHD, and 1,072 controls. We characterize CNV FC-signatures and use them to identify dimensions contributing to complex idiopathic conditions. CNVs have large mirror effects on FC at the global and regional level. Thalamus, somatomotor, and posterior insula regions play a critical role in dysconnectivity shared across deletions, duplications, and idiopathic ASD and SZ, but not ADHD. Individuals with higher similarity to deletion FC-signatures exhibit worse cognitive and behavioral symptoms. Deletion similarities identified at the connectivity level could be related to the redundant associations observed genome-wide between gene expression spatial patterns and FC-signatures. These results may explain why many CNVs affect a similar range of neuropsychiatric symptoms.
Applying Knowledge Transfer for Water Body Segmentation in Peru
Jessenia Gonzalez
César Beltrán
Detecting GAN generated errors
Xiru Zhu
Tianzi Yang
Tzuyang Yu
Despite impressive performance from the latest GANs at generating hyper-realistic images, GAN discriminators have difficulty evaluating the quality of an individual generated sample. This is because the task of evaluating the quality of a generated image differs from deciding whether an image is real or fake. A generated image could be perfect except in a single area but still be detected as fake. Instead, we propose a novel approach for detecting where errors occur within a generated image. By collaging real images with generated images, we compute, for each pixel, whether it belongs to the real distribution or the generated distribution. Furthermore, we leverage attention to model long-range dependencies; this allows detection of errors that are plausible locally but not holistically. For evaluation, we show that our error detection can act as a quality metric for an individual image, unlike FID and IS. We use Improved Wasserstein GAN, BigGAN, and StyleGAN to show that a ranking based on our metric correlates strongly with FID scores. Our work opens the door to a better understanding of GANs and the ability to select the best samples from a GAN model.
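The collaging step can be sketched in a few lines: splice a patch of a generated image into a real one and record per-pixel real/fake labels as the detector's training target. The pure-Python single-channel "images" here are an illustrative assumption, not the paper's pipeline:

```python
# Splice a h-by-w patch of the generated image into the real image at
# (top, left). Returns the collage and a per-pixel label map:
# 1 = pixel came from the real image, 0 = pixel came from the generator.
def collage(real, fake, top, left, h, w):
    out = [row[:] for row in real]
    labels = [[1] * len(row) for row in real]
    for i in range(top, top + h):
        for j in range(left, left + w):
            out[i][j] = fake[i][j]
            labels[i][j] = 0
    return out, labels

# Toy 4x4 single-channel images: an all-ones "real" and all-zeros "fake".
real = [[1.0] * 4 for _ in range(4)]
fake = [[0.0] * 4 for _ in range(4)]
img, lab = collage(real, fake, top=1, left=1, h=2, w=2)
```

A per-pixel classifier trained on such (collage, label-map) pairs then localizes where a fully generated image deviates from the real distribution.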
Approximate information state for partially observed systems
Jayakumar Subramanian
The standard approach for modeling partially observed systems is to model them as partially observable Markov decision processes (POMDPs) and obtain a dynamic program in terms of a belief state. The belief state formulation works well for planning but is not ideal for online reinforcement learning because the belief state depends on the model and, as such, is not observable when the model is unknown. In this paper, we present an alternative notion of an information state for obtaining a dynamic program in partially observed models. In particular, an information state is a sufficient statistic for the current reward that evolves in a controlled Markov manner. We show that such an information state leads to a dynamic programming decomposition. We then present a notion of an approximate information state and an approximate dynamic program based on it. The approximate information state is defined in terms of properties that can be estimated using sampled trajectories. Therefore, it provides a constructive method for reinforcement learning in partially observed systems. We present one such construction and show that it performs better than the state of the art on three benchmark models.
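For contrast, the model-based belief-state update that the information-state approach avoids can be sketched as a Bayes filter; note that it requires the transition and observation models, which is exactly the dependence that makes it unsuitable when the model is unknown. The matrices below are illustrative assumptions:

```python
# Bayes-filter belief update for a POMDP:
#   b'(s') proportional to O(o | s') * sum_s P(s' | s, a) * b(s)
def belief_update(b, a, o, P, O):
    """b: belief over states; P[s][a][s2]: transition probabilities;
    O[s2][o]: observation probabilities. Returns the posterior belief."""
    n = len(b)
    unnorm = [O[s2][o] * sum(P[s][a][s2] * b[s] for s in range(n))
              for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hidden states, one action (identity dynamics), noisy observations.
P = [[[1.0, 0.0]], [[0.0, 1.0]]]
O = [[0.9, 0.1], [0.2, 0.8]]
b_post = belief_update([0.5, 0.5], a=0, o=0, P=P, O=O)
```

An information state replaces this model-dependent recursion with a statistic computable from the observed history alone, whose defining properties can be checked from sampled trajectories.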
Artificial Intelligence Based Cloud Distributor (AI-CD): Probing Low Cloud Distribution with Generative Adversarial Neural Networks
T. Yuan
H. Song
David Hall