Balancing Signals for Semi-Supervised Sequence Learning
Training recurrent neural networks (RNNs) on long sequences using backpropagation through time (BPTT) remains a fundamental challenge. It has been shown that adding a local unsupervised loss term to the optimization objective makes training RNNs on long sequences more effective. While the importance of an unsupervised task can in principle be controlled by a coefficient in the objective function, the gradients with respect to the unsupervised loss term still influence all the hidden-state dimensions, which might cause important information about the supervised task to be degraded or erased. In contrast to existing semi-supervised sequence learning methods, this thesis focuses on a traditionally overlooked mechanism: an architecture with explicitly separated private and shared hidden units, designed to mitigate the detrimental influence of the auxiliary unsupervised loss on the main supervised task. We achieve this by dividing the RNN hidden space into a private space, used only by the supervised task, and a shared space, used by both the supervised and unsupervised tasks. We present extensive experiments with the proposed framework on several long-sequence modeling benchmark datasets. Results indicate that the proposed framework can yield performance gains in RNN models where long-term dependencies are notoriously challenging to handle.
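As a concrete illustration of the private/shared split, the sketch below implements one plausible blocked recurrence in PyTorch: the shared GRU cell reads only the input and its own state, so gradients from the auxiliary loss cannot reach the private units, while the private cell also reads the shared state so that supervised information flows across the whole hidden space. All names, sizes, and the choice of next-input prediction as the unsupervised task are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PrivateSharedRNN(nn.Module):
    """Blocked recurrence: auxiliary gradients touch only the shared cell."""

    def __init__(self, input_size, private_size, shared_size, num_classes):
        super().__init__()
        # Shared cell depends only on (x_t, h_shared): the auxiliary loss
        # therefore back-propagates only through shared parameters.
        self.shared_cell = nn.GRUCell(input_size, shared_size)
        # Private cell may read the shared state as an extra input.
        self.private_cell = nn.GRUCell(input_size + shared_size, private_size)
        self.classifier = nn.Linear(private_size + shared_size, num_classes)
        self.aux_decoder = nn.Linear(shared_size, input_size)

    def forward(self, x):  # x: (batch, time, input_size)
        B, T, _ = x.shape
        h_s = x.new_zeros(B, self.shared_cell.hidden_size)
        h_p = x.new_zeros(B, self.private_cell.hidden_size)
        aux_loss = x.new_zeros(())
        for t in range(T):
            h_s = self.shared_cell(x[:, t], h_s)
            h_p = self.private_cell(torch.cat([x[:, t], h_s], dim=-1), h_p)
            if t + 1 < T:  # auxiliary unsupervised task: predict the next input
                aux_loss = aux_loss + ((self.aux_decoder(h_s) - x[:, t + 1]) ** 2).mean()
        logits = self.classifier(torch.cat([h_p, h_s], dim=-1))
        return logits, aux_loss / max(T - 1, 1)
```

Training would then combine the two signals as `loss = cross_entropy(logits, y) + lam * aux_loss`, with the coefficient `lam` playing the balancing role discussed in the abstract.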
Untangling tradeoffs between recurrence and self-attention in artificial neural networks
Giancarlo Kerg
Bhargav Kanuparthi
Anirudh Goyal
Kyle Goyette
Supplementary Material - Learning to Navigate the Synthetically Accessible Chemical Space Using Reinforcement Learning
Sai Krishna Gottipati
B. Sattarov
Sufeng Niu
Yashaswi Pathak
Haoran Wei
Shengchao Liu
Karam M. J. Thomas
Simon R. Blackburn
Connor W. Coley
While updating the critic network, we multiply a standard normal noise vector by a policy-noise scale of 0.2 and then clip the result to the range [-0.2, 0.2]. This clipped policy noise is added to the next-step action a′ computed by the target actor networks f and π. The actor networks (the f and π networks), the target critic, and the target actor networks are updated once for every two updates to the critic network.
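For concreteness, a minimal PyTorch-style sketch of this schedule, in the spirit of TD3's clipped target-policy smoothing and delayed updates, is given below; `actor_target`, `critic`, `critic_target`, and the batch layout are placeholder assumptions, and the paper's f and π networks are folded into a single `actor_target` callable here.

```python
import torch

POLICY_NOISE = 0.2   # scale applied to the standard normal noise vector
NOISE_CLIP = 0.2     # noise is clipped to [-NOISE_CLIP, NOISE_CLIP]
POLICY_DELAY = 2     # actor and target nets update once per two critic updates

def critic_update(step, batch, actor_target, critic, critic_target,
                  critic_opt, gamma=0.99):
    """One critic update with clipped target-policy noise; returns True when
    the caller should also update the actor and the target networks."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        noise = (torch.randn_like(a) * POLICY_NOISE).clamp(-NOISE_CLIP, NOISE_CLIP)
        a_next = actor_target(s_next) + noise            # smoothed target action
        target_q = r + gamma * (1.0 - done) * critic_target(s_next, a_next)
    td_loss = ((critic(s, a) - target_q) ** 2).mean()
    critic_opt.zero_grad()
    td_loss.backward()
    critic_opt.step()
    return step % POLICY_DELAY == 0
```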
On Variational Learning of Controllable Representations for Text without Supervision
Peng Xu
Yanshuai Cao
The variational autoencoder (VAE) can learn the manifold of natural images on certain datasets, as evidenced by meaningful interpolation or extrapolation in the continuous latent space. However, on discrete data such as text, it is unclear whether unsupervised learning can discover a similar latent space that allows controllable manipulation. In this work, we find that sequence VAEs trained on text fail to decode properly when the latent codes are manipulated, because the modified codes often land in holes or vacant regions of the aggregated posterior latent space, where the decoding network fails to generalize. Both as a validation of this explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex and to perform manipulation within this simplex. Our proposed method mitigates the latent-vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method outperforms unsupervised baselines and strong supervised approaches on text style transfer, and it is capable of more flexible fine-grained control over text generation than existing methods.
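One plausible way to realize the simplex constraint is to predict mixture weights with a softmax and take a convex combination of learned vertex embeddings, as in the sketch below; the module and all its names are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class SimplexPosterior(nn.Module):
    """Constrains the VAE posterior mean to the convex hull of K learned
    vertices, so manipulated codes stay in a region the decoder has seen."""

    def __init__(self, hidden_size, latent_size, num_vertices=10):
        super().__init__()
        self.vertices = nn.Parameter(torch.randn(num_vertices, latent_size))
        self.to_weights = nn.Linear(hidden_size, num_vertices)
        self.to_logvar = nn.Linear(hidden_size, latent_size)

    def forward(self, h):  # h: (batch, hidden_size) encoder output
        weights = torch.softmax(self.to_weights(h), dim=-1)  # on the simplex
        mu = weights @ self.vertices          # convex combination of vertices
        logvar = self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar
```

Because the softmax keeps the weights on the probability simplex, any manipulation expressed as a re-weighting of the vertices remains inside the convex hull the decoder was trained on, which is exactly the region where it can be expected to generalize.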
You could have said that instead: Improving Chatbots with Natural Language Feedback
Makesh Narsimhan Sreedhar
Kun Ni
The ubiquitous nature of dialogue systems and their interaction with users generate an enormous amount of data. Can we improve chatbots using this data? A self-feeding chatbot improves itself by asking for natural language feedback when a user is dissatisfied with its response and uses this feedback as an additional training sample. However, user feedback in most cases contains extraneous sequences that hinder its usefulness as a training sample. In this work, we propose a generative adversarial model that converts noisy feedback into a plausible natural response in a conversation. The generator's goal is to convert the feedback into a response that answers the user's previous utterance and to fool the discriminator, which distinguishes feedback from natural responses. We show that augmenting the original training data with these modified feedback responses improves the original chatbot's performance in ranking correct responses on the PersonaChat dataset from 69.94% to 75.96%, a large improvement given that the original model is already trained on 131k samples.
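A schematic of the adversarial objective described above, assuming placeholder `generator(context, feedback)` and `discriminator(context, response)` callables and a generator that emits differentiable (e.g. Gumbel-softmax) token outputs so the adversarial gradient can flow through discrete text:

```python
import torch

def adversarial_step(generator, discriminator, g_opt, d_opt,
                     context, feedback, natural_response):
    """One illustrative training step: the generator rewrites raw user
    feedback into a response, the discriminator scores how natural it is."""
    eps = 1e-8

    # Discriminator step: natural responses -> 1, rewritten feedback -> 0.
    with torch.no_grad():
        fake = generator(context, feedback)
    d_loss = -(torch.log(discriminator(context, natural_response) + eps)
               + torch.log(1.0 - discriminator(context, fake) + eps)).mean()
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: answer the previous utterance while fooling the
    # discriminator into scoring the rewritten feedback as natural.
    fake = generator(context, feedback)
    g_loss = -torch.log(discriminator(context, fake) + eps).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```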
Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling
Tong Che
Ruixiang Zhang
Jascha Sohl-Dickstein
Yuan Cao
We show that the sum of the implicit generator log-density …
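The technique named in the title, discriminator-driven latent sampling, amounts to Langevin dynamics in the generator's latent space on an energy that combines the standard-normal prior with the discriminator's logit. The sketch below is a schematic reading of that idea; the step size, step count, and the assumption that `discriminator` returns pre-sigmoid logits are all ours.

```python
import torch

def ddls_sample(generator, discriminator, z, steps=50, step_size=1e-2):
    """Langevin sampling in latent space under the energy
    E(z) = ||z||^2 / 2 - logit(D(G(z)))."""
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        energy = 0.5 * (z ** 2).sum(dim=1) - discriminator(generator(z)).squeeze(-1)
        (grad,) = torch.autograd.grad(energy.sum(), z)
        with torch.no_grad():  # Langevin step: gradient descent plus noise
            z.add_(-0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z))
    return generator(z.detach())
```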
Learning from Learning Machines: Optimisation, Rules, and Social Norms
Travis LaCroix
There is an analogy between machine learning systems and economic entities in that they are both adaptive, and their behaviour is specified in a more-or-less explicit way. It appears that the area of AI most analogous to the behaviour of economic entities is that of morally good decision-making, but it is an open question how precisely moral behaviour can be achieved in an AI system. This paper explores the analogy between these two complex systems, and we suggest that a clearer understanding of this apparent analogy may help us move forward in both the socio-economic domain and the AI domain: known results in economics may inform feasible solutions in AI safety, and, conversely, known results in AI may inform economic policy. If this claim is correct, then the recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
CLOSURE: Assessing Systematic Generalization of CLEVR Models
Harm de Vries
Shikhar Murty
Philippe Beaudoin
Interactive Psychometrics for Autism with the Human Dynamic Clamp: Interpersonal Synchrony from Sensory-motor to Socio-cognitive Domains
Florence Baillin
Aline Lefebvre
Amandine Pedoux
Yann Beauxis
Denis-Alexander Engemann
Anna Maruani
Frederique Amsellem
Thomas Bourgeron
Richard Delorme
Neuropsychiatric mutations delineate functional brain connectivity dimensions contributing to autism and schizophrenia
Clara A. Moreau
Sebastian Urchs
Pierre Orban
Catherine Schramm
Aurélie Labbe
Guillaume Huguet
Elise Douard
Pierre-Olivier Quirion
Amy Lin
Leila Kushan
Stephanie Grot
David Luck
Adrianna Mendrek
Stephane Potvin
Emmanuel Stip
Thomas Bourgeron
Alan C. Evans
Carrie E. Bearden
Sébastien Jacquemont
16p11.2 and 22q11.2 Copy Number Variants (CNVs) confer high risk for Autism Spectrum Disorder (ASD), schizophrenia (SZ), and Attention-Deficit/Hyperactivity Disorder (ADHD), but their impact on functional connectivity (FC) remains unclear. We analyzed resting-state functional magnetic resonance imaging data from 101 CNV carriers, 755 individuals with idiopathic ASD, SZ, or ADHD, and 1,072 controls. We used CNV FC-signatures to identify dimensions contributing to complex idiopathic conditions. CNVs had large mirror effects on FC at the global and regional levels. Thalamus, somatomotor, and posterior insula regions played a critical role in the dysconnectivity shared across deletions, duplications, idiopathic ASD, and SZ, but not ADHD. Individuals with higher similarity to deletion FC-signatures exhibited worse cognitive and behavioral symptoms. Deletion similarities identified at the connectivity level could be related to the redundant associations observed genome-wide between gene-expression spatial patterns and FC-signatures. These results may explain why many CNVs affect a similar range of neuropsychiatric symptoms.
Applying Knowledge Transfer for Water Body Segmentation in Peru
Jessenia Gonzalez
Debjani Bhowmick
César Beltrán
Kris Sankaran
Approximate information state for partially observed systems
Jayakumar Subramanian
The standard approach for modeling partially observed systems is to model them as partially observable Markov decision processes (POMDPs) and obtain a dynamic program in terms of a belief state. The belief-state formulation works well for planning but is not ideal for online reinforcement learning, because the belief state depends on the model and, as such, is not observable when the model is unknown. In this paper, we present an alternative notion of an information state for obtaining a dynamic program in partially observed models. In particular, an information state is a sufficient statistic for the current reward that evolves in a controlled Markov manner. We show that such an information state leads to a dynamic programming decomposition. We then present a notion of an approximate information state and an approximate dynamic program based on it. The approximate information state is defined in terms of properties that can be estimated using sampled trajectories, which provides a constructive method for reinforcement learning in partially observed systems. We present one such construction and show that it performs better than the state of the art on three benchmark models.
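In symbols, the two defining properties just described can be written as follows, with history H_t, action A_t, reward R_t, and information state Z_t = σ_t(H_t) (notation assumed, following the abstract's description):

```latex
% (P1) sufficient for the current reward
\mathbb{E}[R_t \mid H_t = h_t, A_t = a_t]
  = \mathbb{E}[R_t \mid Z_t = \sigma_t(h_t), A_t = a_t]

% (P2) evolves in a controlled Markov manner
\mathbb{P}(Z_{t+1} \in B \mid H_t = h_t, A_t = a_t)
  = \mathbb{P}(Z_{t+1} \in B \mid Z_t = \sigma_t(h_t), A_t = a_t)
```

An approximate information state would satisfy these two properties only approximately, and it is those approximation errors, estimable from sampled trajectories, that make the construction usable for reinforcement learning when the model is unknown.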