Publications

Introduction to NIPS 2017 Competition Track
Sergio Escalera
Markus Weimer
Mikhail Burtsev
Valentin Malykh
Varvara Logacheva
Iulian V. Serban
Alexander Rudnicky
Alan W. Black
Shrimai Prabhumoye
Łukasz Kidziński
Sharada Prasanna Mohanty
Carmichael F. Ong
Jennifer L. Hicks
Sergey Levine
Marcel Salathé
Scott Delp
Iker Huerga
Alexander Grigorenko
Leifur Thorbergsson
Anasuya Das
Kyla Nemitz
Jenna Sandker
Stephen King
Alexander S. Ecker
Leon A. Gatys
Matthias Bethge
Jordan Boyd-Graber
Shi Feng
Pedro Rodriguez
Mohit Iyyer
He He
Hal Daumé III
Sean McGregor
Amir Banifatemi
Alexey Kurakin
Ian J. Goodfellow
The First Conversational Intelligence Challenge
Mikhail Burtsev
Varvara Logacheva
Valentin Malykh
Iulian V. Serban
Shrimai Prabhumoye
Alan W. Black
Alexander Rudnicky
Combining adaptive algorithms and hypergradient method: a performance and robustness study
Nicolas Le Roux
Convergence Properties of Deep Neural Networks on Separable Data
Remi Tachet des Combes
Samira Shabanian
While a lot of progress has been made in recent years, the dynamics of learning in deep nonlinear neural networks remain to this day largely misunderstood. In this work, we study the case of binary classification and prove various properties of learning in such networks under strong assumptions such as linear separability of the data. Extending existing results from the linear case, we confirm empirical observations by proving that the classification error also follows a sigmoidal shape in nonlinear architectures. We show that given proper initialization, learning exhibits parallel independent modes and that certain regions of parameter space might lead to failed training. We also demonstrate that input norm and features' frequency in the dataset lead to distinct convergence speeds, which might shed some light on the generalization capabilities of deep neural networks. We provide a comparison between the dynamics of learning with cross-entropy and hinge losses, which could prove useful to understand recent progress in the training of generative adversarial networks. Finally, we identify a phenomenon that we term gradient starvation, where the most frequent features in a dataset prevent the learning of other less frequent but equally informative features.
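
The gradient-starvation effect described above lends itself to a quick demonstration. The sketch below is illustrative only (toy data, arbitrary hyperparameters, a logistic loss; none of it comes from the paper): a feature that is informative in every example soaks up the loss gradient, so the weight on a rarer but equally informative feature barely grows.

```python
# Toy illustration of gradient starvation (assumptions, not the paper's code):
# feature 1 predicts the label in every example, feature 2 only in ~10% of them.
import torch

torch.manual_seed(0)
n = 1000
y = torch.randint(0, 2, (n,)).float() * 2 - 1        # labels in {-1, +1}
freq = y.clone()                                      # informative everywhere
rare = y * (torch.rand(n) < 0.1).float()              # informative on ~10% of examples
X = torch.stack([freq, rare], dim=1)

w = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for step in range(2000):
    margins = y * (X @ w)
    loss = torch.nn.functional.softplus(-margins).mean()  # logistic loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once the frequent feature fits the data, margins are large and the gradient
# w.r.t. the rare feature is starved, so its weight stays comparatively small.
print(w.detach())
```
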
Deep Graph Infomax
William Fedus
William L. Hamilton
Pietro Liò
R Devon Hjelm
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.
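
A rough sketch of the DGI training signal may help make the objective concrete. It assumes a one-layer GCN encoder, row-shuffled features as the corruption function, and a bilinear discriminator; these are plausible simplifications of the paper's setup, not its exact architecture.

```python
# Minimal DGI-style objective on a random graph (illustrative assumptions).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_nodes, in_dim, hid_dim = 100, 16, 32
X = torch.randn(n_nodes, in_dim)
A = (torch.rand(n_nodes, n_nodes) < 0.05).float()
A = ((A + A.t()) > 0).float()                          # symmetrize
A_hat = A + torch.eye(n_nodes)                         # add self-loops
d_inv_sqrt = A_hat.sum(1).rsqrt()
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]  # normalized adjacency

enc = torch.nn.Linear(in_dim, hid_dim)                 # one-layer GCN weight
W = torch.nn.Parameter(0.01 * torch.randn(hid_dim, hid_dim))  # bilinear discriminator
opt = torch.optim.Adam(list(enc.parameters()) + [W], lr=1e-3)

for step in range(200):
    H = torch.relu(A_norm @ enc(X))                    # patch representations
    s = torch.sigmoid(H.mean(dim=0))                   # graph-level summary
    H_neg = torch.relu(A_norm @ enc(X[torch.randperm(n_nodes)]))  # corrupted patches
    scores = torch.cat([H @ W @ s, H_neg @ W @ s])     # real vs. corrupted scores
    labels = torch.cat([torch.ones(n_nodes), torch.zeros(n_nodes)])
    loss = F.binary_cross_entropy_with_logits(scores, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
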
DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation
Mohamed Ahmed
Marwin Segler
Amir Saffari
Generating novel molecules with optimal properties is a crucial step in many industries such as drug discovery. Recently, deep generative models have shown a promising way of performing de-novo molecular design. Although graph generative models are currently available, they either have a number of parameters that depends on graph size, limiting their use to very small graphs, or are formulated as a sequence of discrete actions needed to construct a graph, making the output graph non-differentiable w.r.t. the model parameters and therefore preventing their use in scenarios such as conditional graph generation. In this work we propose a model for conditional graph generation that is computationally efficient and enables direct optimisation of the graph. We demonstrate favourable performance of our model on prototype-based molecular graph conditional generation tasks.
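
The edge-factorization idea is easy to sketch: decode a dense matrix of edge probabilities from node embeddings so that the generated graph stays differentiable with respect to the parameters that produced it. Everything below (embedding dimensions, the inner-product decoder, the toy target) is an illustrative assumption, not the DEFactor model itself.

```python
# Differentiable edge factorization sketch: a soft adjacency matrix decoded
# from node embeddings, through which gradients flow (illustrative only).
import torch

torch.manual_seed(0)
n_nodes, emb_dim = 8, 16
Z = torch.randn(n_nodes, emb_dim, requires_grad=True)  # node embeddings (e.g. from an encoder)

edge_logits = Z @ Z.t()                                # factorized pairwise scores
edge_probs = torch.sigmoid(edge_logits)                # soft adjacency in [0, 1]
edge_probs = edge_probs * (1 - torch.eye(n_nodes))     # zero out self-loops

# Any scalar objective on edge_probs (matching a target graph, a property
# predictor, ...) back-propagates to Z without discrete construction steps.
target = (torch.rand(n_nodes, n_nodes) < 0.3).float().triu(1)
target = target + target.t()
loss = torch.nn.functional.binary_cross_entropy(edge_probs, target)
loss.backward()
print(Z.grad.shape)                                    # gradients reach the embeddings
```
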
On Difficulties of Probability Distillation
Dopamine: A Research Framework for Deep Reinforcement Learning
Subhodeep Moitra
Carles Gelada
Saurabh Kumar
Marc G. Bellemare
Deep reinforcement learning (deep RL) research has grown significantly in recent years. A number of software offerings now exist that provide stable, comprehensive implementations for benchmarking. At the same time, recent deep RL research has become more diverse in its goals. In this paper we introduce Dopamine, a new research framework for deep RL that aims to support some of that diversity. Dopamine is open-source, TensorFlow-based, and provides compact and reliable implementations of some state-of-the-art deep RL agents. We complement this offering with a taxonomy of the different research objectives in deep RL research. While by no means exhaustive, our analysis highlights the heterogeneity of research in the field, and the value of frameworks such as ours.
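
A minimal training run with Dopamine follows the pattern shown in the project's README; module paths and gin config locations have shifted across Dopamine versions, so treat the exact names below as assumptions rather than a pinned API.

```python
# Hedged usage sketch based on the Dopamine README; verify the module path and
# gin config location against the version of Dopamine you have installed.
from dopamine.discrete_domains import run_experiment

BASE_DIR = '/tmp/dopamine_runs'                        # where checkpoints and logs go
GIN_FILES = ['dopamine/agents/dqn/configs/dqn.gin']    # a bundled DQN config

run_experiment.load_gin_configs(GIN_FILES, [])         # apply gin configuration
runner = run_experiment.create_runner(BASE_DIR)        # builds agent + environment
runner.run_experiment()                                # train/eval loop
```
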
EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models
Knowledge Representation for Reinforcement Learning using General Value Functions
Gheorghe Comanici
Andre Barreto
Daniel Toyama
Eser Aygün
Sasha Vezhnevets
Shaobo Hou
Shibl Mourad
Learning powerful policies and better dynamics models by encouraging consistency
Learning of Sophisticated Curriculums by viewing them as Graphs over Tasks