Boundary-Seeking GANs
Athul Jacob
Adam Trischler
Gerry Che
Kyunghyun Cho
Combining Model-based and Model-free RL via Multi-step Control Variates
Tong Che
Yuchen Lu
George Tucker
Surya Bhupatiraju
Shane Gu
Sergey Levine
Existence of Nash Equilibria on Integer Programming Games
Andrea Lodi
João Pedro Pedroso
Learning Generative Models with Locally Disentangled Latent Factors
One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks. For example, generating an image at a low resolution and then learning to refine that into a high resolution image often improves results substantially. Here we explore a novel strategy for decomposing generation for complicated objects in which we first generate latent variables which describe a subset of the observed variables, and then map from these latent variables to the observed space. We show that this allows us to achieve decoupled training of complicated generative models and present both theoretical and experimental results supporting the benefit of such an approach.
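A minimal sketch of the two-stage decomposition described above, assuming images split into quadrants: each quadrant gets its own autoencoder trained independently (the decoupled stage), and a second model is fit over the concatenated latents. The names PatchAE and LatentPrior, and all layer sizes, are illustrative assumptions, not the paper's architecture.

```python
# Sketch of decoupled two-stage generation, assuming 28x28 images
# split into four 14x14 quadrants. Illustrative only.
import torch
import torch.nn as nn

class PatchAE(nn.Module):
    """Autoencoder for one quadrant; trained independently of the others."""
    def __init__(self, patch_dim=14 * 14, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(patch_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, patch_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

class LatentPrior(nn.Module):
    """Second stage: maps noise to the concatenated patch latents."""
    def __init__(self, noise_dim=32, z_dim=16, n_patches=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim * n_patches))

    def forward(self, eps):
        return self.net(eps)

# Stage 1: fit each PatchAE on its quadrant with a reconstruction loss.
# Stage 2: fit LatentPrior so its samples match the encoded latents
# (e.g. adversarially). Sampling: eps ~ N(0, I) -> LatentPrior ->
# split into per-patch latents -> each PatchAE.dec.
```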
Online Hyper-Parameter Optimization
Damien Vincent
Sylvain Gelly
Olivier Bousquet
Finding Flatter Minima with SGD
Stanisław Jastrzębski
Zac Kenton
Devansh Arpit
Nicolas Ballas
Asja Fischer
Amos Storkey
Graph Priors for Deep Neural Networks
Francis Dutil
Joseph Paul Cohen
Martin Weiss
Georgy Derevyanko
In this work we explore how gene-gene interaction graphs can be used as a prior for the representation of a model to construct features based on known interactions between genes. Most existing machine learning work on graphs focuses on building models when data is confined to a graph structure. Here, by contrast, we focus on using the information from a graph to build better representations in our models. We use the percolate task, determining if a path exists across a grid for a set of node values, as a proxy for gene pathways. We create variants of the percolate task to explore where existing methods fail and to determine what can be improved when applying them to a real task. This leads us to propose new methods based on Graph Convolutional Networks (GCNs) that use pooling and dropout to deal with noise in the graph prior.
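As a concrete reading of this setup, the sketch below uses a fixed interaction graph as a prior inside a small GCN, with dropout on the propagated features and global pooling before classification. The normalization and layer structure follow the standard GCN recipe; the class names and sizes are assumptions, not the paper's model.

```python
# Minimal GCN with a fixed graph prior, assuming node features of
# shape (batch, n_nodes, in_dim). Illustrative only.
import torch
import torch.nn as nn

def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} of the prior graph."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, adj_norm, p_drop=0.5):
        super().__init__()
        self.register_buffer("adj", adj_norm)  # fixed graph prior
        self.lin = nn.Linear(in_dim, out_dim)
        self.drop = nn.Dropout(p_drop)  # robustness to a noisy prior

    def forward(self, x):                      # x: (batch, nodes, in_dim)
        x = torch.einsum("ij,bjf->bif", self.adj, x)  # propagate along edges
        return self.drop(torch.relu(self.lin(x)))

class GCNClassifier(nn.Module):
    def __init__(self, adj, in_dim=1, hid=32, n_classes=2):
        super().__init__()
        a = normalize_adj(adj)
        self.g1 = GCNLayer(in_dim, hid, a)
        self.g2 = GCNLayer(hid, hid, a)
        self.out = nn.Linear(hid, n_classes)

    def forward(self, x):
        h = self.g2(self.g1(x))
        return self.out(h.mean(dim=1))  # global mean pooling over nodes

# For a toy percolate-style instance, adj would encode a grid's edges
# and x would hold one scalar value per grid node.
```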
Inferring Identity Factors for Grouped Examples
We propose a method for modelling groups of face images from the same identity. The model is trained to infer a distribution over the latent space for identity given a small set of “training data”. One can then sample from that latent representation to produce new images of the same identity. We demonstrate that the model disentangles identity factors from image-specific vectors. We also perform generative classification over identities to assess the method's feasibility for few-shot face recognition.
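One way to realize this kind of group-level inference is sketched below: per-image encodings are pooled over the group to give a Gaussian over an identity latent, each image keeps its own image-specific vector, and the decoder conditions on both. The module names and dimensions are hypothetical, chosen only to illustrate the pooling-based inference.

```python
# Sketch of group-level identity inference. Names and sizes are assumptions.
import torch
import torch.nn as nn

class GroupIdentityModel(nn.Module):
    def __init__(self, x_dim=64 * 64, c_dim=32, s_dim=16, hid=256):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(x_dim, hid), nn.ReLU())
        self.c_head = nn.Linear(hid, 2 * c_dim)   # identity mu, logvar
        self.s_head = nn.Linear(hid, s_dim)       # per-image vector
        self.dec = nn.Sequential(nn.Linear(c_dim + s_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, x_dim), nn.Sigmoid())

    def infer_identity(self, x_group):            # x_group: (n_images, x_dim)
        h = self.feat(x_group).mean(dim=0)        # pool over the group
        mu, logvar = self.c_head(h).chunk(2, dim=-1)
        return mu, logvar

    def forward(self, x_group):
        mu, logvar = self.infer_identity(x_group)
        c = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam.
        s = self.s_head(self.feat(x_group))       # one vector per image
        c_tiled = c.expand(x_group.size(0), -1)   # shared identity latent
        return self.dec(torch.cat([c_tiled, s], dim=-1)), mu, logvar
```

Generative classification then amounts to scoring a query image under the identity distribution inferred from each candidate group and picking the best-scoring identity.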
Online variance-reducing optimization
Reza Babanezhad Harikandeh
Pierre-Antoine Manzagol
SGD Smooths the Sharpest Directions
Stanisław Jastrzębski
Zac Kenton
Nicolas Ballas
Asja Fischer
Amos Storkey
Stochastic gradient descent (SGD) is able to find regions that generalize well, even in drastically over-parametrized models such as deep neural networks. We observe that the noise in SGD controls the spectral norm and conditioning of the Hessian throughout training. We hypothesize that this phenomenon arises from the dynamics of neurons saturating their non-linearity along the largest-curvature directions, which improves conditioning.
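Tracking the Hessian spectral norm during training, as this abstract describes, is typically done with power iteration on Hessian-vector products via double backpropagation. The sketch below is that standard estimator, not code from the paper; the function name and iteration count are assumptions.

```python
# Sketch: estimate the largest-magnitude Hessian eigenvalue of a loss
# w.r.t. model parameters by power iteration on Hessian-vector products.
import torch

def hessian_spectral_norm(loss, params, n_iters=20):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(n_iters):
        norm = torch.sqrt(sum((x ** 2).sum() for x in v))
        v = [x / norm for x in v]
        # Hessian-vector product: differentiate (grad . v) w.r.t. params
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        v = [h.detach() for h in hv]
    # ||H v|| for unit v approximates |lambda_max| after convergence.
    return torch.sqrt(sum((x ** 2).sum() for x in v)).item()

# Usage (on a minibatch): loss = criterion(model(x), y)
# lam = hessian_spectral_norm(loss, list(model.parameters()))
```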
Extending the Framework of Equilibrium Propagation to General Dynamics
Benjamin Scellier
Anirudh Goyal
Jonathan Binas
Thomas Mesnard
Hierarchical Adversarially Learned Inference
Ishmael Belghazi
Sai Rajeswar
Olivier Mastropietro
Jovana Mitrovic
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and the inference model are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations and provides semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shannon divergence between the generative and inference networks is enough to minimize the reconstruction error. The resulting discovery of a semantically meaningful hierarchical latent structure is exemplified on the CelebA dataset, where the features learned by our model in an unsupervised way outperform the best handcrafted features. The extracted features also remain competitive with several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.
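A minimal structural sketch of this two-level setup: a Markovian generator chain, an inference chain running the other way, and a discriminator over (x, z1, z2) triples that the two chains try to fool in opposite directions. The stochastic conditionals are collapsed to deterministic MLPs for brevity, and all names and sizes are assumptions rather than the paper's architecture.

```python
# Structural sketch of two-level adversarially learned inference.
# Deterministic conditionals for brevity; illustrative only.
import torch
import torch.nn as nn

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 128), nn.ReLU(), nn.Linear(128, o))

class HALI(nn.Module):
    def __init__(self, x_dim=784, z1_dim=64, z2_dim=16):
        super().__init__()
        self.z2_dim = z2_dim
        self.gen_z1 = mlp(z2_dim, z1_dim)   # generator chain p(z1 | z2)
        self.gen_x = mlp(z1_dim, x_dim)     # generator chain p(x | z1)
        self.inf_z1 = mlp(x_dim, z1_dim)    # inference chain q(z1 | x)
        self.inf_z2 = mlp(z1_dim, z2_dim)   # inference chain q(z2 | z1)
        self.disc = mlp(x_dim + z1_dim + z2_dim, 1)  # scores triples

    def generate(self, n):
        z2 = torch.randn(n, self.z2_dim)
        z1 = self.gen_z1(z2)
        return self.gen_x(z1), z1, z2

    def infer(self, x):
        z1 = self.inf_z1(x)
        return x, z1, self.inf_z2(z1)

    def score(self, x, z1, z2):
        return self.disc(torch.cat([x, z1, z2], dim=-1))

# Training alternates: the discriminator separates infer(x) triples from
# generate(n) triples, while both chains are updated to fool it.
```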