Maxime Gasse

Associate Industry Member
Adjunct Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Senior Research Scientist, ServiceNow
Research Topics
Causality
LLM Agents
Probabilistic Models
Reinforcement Learning

Biography

I am a senior research scientist at ServiceNow in Montréal, where I do research at the intersection of causal inference and reinforcement learning. I am an adjunct professor at Polytechnique Montréal (courtesy appointment) and an associate industry member of Mila – Quebec Artificial Intelligence Institute.

I am fascinated by the question of AI: can we build machines that think? I humbly believe that our attempts at designing thinking machines can be a path towards a fundamental understanding of intelligence and of ourselves. Currently, I am interested in whether and how ideas from the field of causality can help in the design of autonomous learning agents. I am looking for motivated interns with strong technical skills and a background in reinforcement learning and/or causality.

Current Students

PhD - Polytechnique Montréal
Co-supervisor:
Master's Research - Polytechnique Montréal
Co-supervisor:

Publications

On the Effectiveness of Two-Step Learning for Latent-Variable Models
Latent-variable generative models offer a principled solution for modeling and sampling from complex probability distributions. Implementing a joint training objective with a complex prior, however, can be a tedious task, as one is typically required to derive and code a specific cost function for each new type of prior distribution. In this work, we propose a general framework for learning latent-variable generative models in a two-step fashion. In the first step of the framework, we train an autoencoder, and in the second step we fit a prior model on the resulting latent distribution. This two-step approach offers a convenient alternative to joint training, as it allows existing models to be combined in a straightforward way, without the hassle of deriving and coding new joint training objectives. Through a set of experiments, we demonstrate that two-step learning yields performance similar to joint training, and in some cases even more accurate modeling.
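
A minimal sketch of this two-step recipe, assuming a small PyTorch autoencoder for step one and a scikit-learn Gaussian mixture as the prior model for step two; the architecture, toy data, and hyperparameters are illustrative placeholders, not those of the paper.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Step 1: train a plain autoencoder with a reconstruction loss only.
class AutoEncoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

x = torch.rand(1024, 784)  # toy data standing in for a real dataset
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    x_hat, _ = model(x)
    nn.functional.mse_loss(x_hat, x).backward()
    opt.step()

# Step 2: fit a prior model on the latent codes of the frozen encoder.
with torch.no_grad():
    z = model.encoder(x).numpy()
prior = GaussianMixture(n_components=10).fit(z)

# Sampling: draw latents from the fitted prior, then decode them.
z_new, _ = prior.sample(16)
with torch.no_grad():
    samples = model.decoder(torch.as_tensor(z_new, dtype=torch.float32))
```

Because the prior is fitted only after the encoder is trained, any density model exposing fit and sample could be swapped in for the Gaussian mixture without deriving a new joint cost function, which is the convenience the abstract highlights.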
On generalized surrogate duality in mixed-integer nonlinear programming
Benjamin Müller
Gonzalo Muñoz
Ambros Gleixner
Felipe Serrano
Exact Combinatorial Optimization with Graph Convolutional Neural Networks
Didier Chételat
Nicola Ferroni
Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose a new graph convolutional neural network model for learning branch-and-bound variable selection policies, which leverages the natural variable-constraint bipartite graph representation of mixed-integer linear programs. We train our model via imitation learning from the strong branching expert rule, and demonstrate on a series of hard problems that our approach produces policies that improve upon state-of-the-art machine-learning methods for branching and generalize to instances significantly larger than seen during training. Moreover, we improve for the first time over expert-designed branching rules implemented in a state-of-the-art solver on large problems. Code for reproducing all the experiments can be found at this https URL.
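
A minimal sketch, in PyTorch, of the bipartite message-passing idea described above: variable and constraint nodes exchange messages along the non-zeros of the constraint matrix, and a head scores each variable as a branching candidate, trained by cross-entropy against the strong-branching choice. Feature dimensions, layer sizes, and the single-step training loop are illustrative assumptions, not the paper's exact GCNN.

```python
import torch
import torch.nn as nn

class BipartiteGCN(nn.Module):
    def __init__(self, var_feats=19, cons_feats=5, hidden=64):
        super().__init__()
        self.var_embed = nn.Linear(var_feats, hidden)
        self.cons_embed = nn.Linear(cons_feats, hidden)
        self.var_to_cons = nn.Linear(hidden, hidden)
        self.cons_to_var = nn.Linear(hidden, hidden)
        self.score = nn.Linear(hidden, 1)  # one branching logit per variable

    def forward(self, v, c, A):
        # v: (n_vars, var_feats), c: (n_cons, cons_feats),
        # A: (n_cons, n_vars) adjacency from the constraint matrix sparsity.
        hv = torch.relu(self.var_embed(v))
        hc = torch.relu(self.cons_embed(c))
        hc = torch.relu(hc + A @ self.var_to_cons(hv))      # variables -> constraints
        hv = torch.relu(hv + A.t() @ self.cons_to_var(hc))  # constraints -> variables
        return self.score(hv).squeeze(-1)

# Imitation learning: treat the variable picked by strong branching as a class
# label and minimize cross-entropy over the node's candidate variables.
n_cons, n_vars = 30, 50
v, c = torch.rand(n_vars, 19), torch.rand(n_cons, 5)
A = (torch.rand(n_cons, n_vars) < 0.1).float()  # toy sparsity pattern
expert_choice = torch.tensor(7)                  # hypothetical expert label

model = BipartiteGCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
logits = model(v, c, A)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), expert_choice.unsqueeze(0))
loss.backward()
opt.step()
```

The bipartite structure is what lets one model handle programs of any size: the same message-passing weights apply whatever the number of variables and constraints, so a policy trained on small instances can be evaluated on much larger ones.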