I am supervised by Aaron Courville and co-supervised by Laurent Charlin. My research centers on representation learning with deep latent variable models (such as variational autoencoders) and efficient approximate inference. My recent focus is on improving the expressivity of variational inference (see our ICML 2018 paper on Neural Autoregressive Flows, NAF!), the optimization of inference, and understanding the training dynamics of generative models more broadly. I am also interested in meta-learning, natural language understanding, and reinforcement learning.