
Maxime Gasse

Associate Industry Member
Adjunct Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Senior Research Scientist, ServiceNow
Research Topics
LLM-based Agents
Reinforcement Learning
Causality
Probabilistic Models

Biography

I am a senior research scientist at ServiceNow in Montréal, where I do research at the intersection of causal inference and reinforcement learning. I am also an adjunct professor at Polytechnique Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute.

I am fascinated by the question of artificial intelligence: can we build machines that think? I humbly believe that our attempts to design thinking machines can be a path toward a fundamental understanding of intelligence, and of ourselves. I am currently interested in whether, and how, ideas from the field of causality can contribute to the design of autonomous learning agents. I am looking for motivated interns with strong technical skills and experience in reinforcement learning and/or causality.

Current Students

PhD - Polytechnique
Co-supervisor:
Master's Research - Polytechnique
Co-supervisor:

Publications

On the Effectiveness of Two-Step Learning for Latent-Variable Models
Latent-variable generative models offer a principled solution for modeling and sampling from complex probability distributions. Implementing a joint training objective with a complex prior, however, can be a tedious task, as one is typically required to derive and code a specific cost function for each new type of prior distribution. In this work, we propose a general framework for learning latent-variable generative models in a two-step fashion. In the first step of the framework, we train an autoencoder, and in the second step we fit a prior model on the resulting latent distribution. This two-step approach offers a convenient alternative to joint training, as it allows for a straightforward combination of existing models without the hassle of deriving new cost functions or coding joint training objectives. Through a set of experiments, we demonstrate that two-step learning results in performance similar to joint training, and in some cases even in more accurate modeling.
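
A rough sketch of the two-step recipe described in this abstract, in PyTorch. This is illustrative only, not the paper's code: the toy data, network sizes, and the choice of a Gaussian-mixture prior are all assumptions.

import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

torch.manual_seed(0)
data = torch.randn(1024, 32)  # toy dataset standing in for real observations

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))
decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))

# Step 1: train a plain autoencoder with a reconstruction loss only.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    recon = decoder(encoder(data))
    loss = nn.functional.mse_loss(recon, data)
    loss.backward()
    opt.step()

# Step 2: fit a prior on the resulting latent distribution. Any density
# model can be plugged in here; a Gaussian mixture keeps the sketch small.
with torch.no_grad():
    latents = encoder(data).numpy()
prior = GaussianMixture(n_components=5, random_state=0).fit(latents)

# Sampling from the combined model: draw latents from the prior, then decode.
z_new, _ = prior.sample(16)
with torch.no_grad():
    samples = decoder(torch.as_tensor(z_new, dtype=torch.float32))

The convenience claimed in the abstract shows up in step 2: swapping in a different prior (a flow, an autoregressive model, another mixture) changes one line and requires no new joint objective.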
On generalized surrogate duality in mixed-integer nonlinear programming
Benjamin Müller
Gonzalo Muñoz
Ambros Gleixner
Felipe Serrano
Exact Combinatorial Optimization with Graph Convolutional Neural Networks
Didier Chételat
Nicola Ferroni
Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose a new graph convolutional neural network model for learning branch-and-bound variable selection policies, which leverages the natural variable-constraint bipartite graph representation of mixed-integer linear programs. We train our model via imitation learning from the strong branching expert rule, and demonstrate on a series of hard problems that our approach produces policies that improve upon state-of-the-art machine-learning methods for branching and generalize to instances significantly larger than seen during training. Moreover, we improve for the first time over expert-designed branching rules implemented in a state-of-the-art solver on large problems. Code for reproducing all the experiments can be found at this https URL.
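
A minimal sketch of the bipartite graph convolution and imitation-learning setup this abstract describes, hand-written for illustration; the authors' actual code lives in the repository linked above. The feature sizes, the single constraint-to-variable pass, and the toy instance are assumptions.

import torch
import torch.nn as nn

class BipartiteGCN(nn.Module):
    """Scores each variable of a MILP from one round of message passing
    over the variable-constraint incidence graph (constraint -> variable)."""
    def __init__(self, var_feats, cons_feats, hidden=32):
        super().__init__()
        self.var_embed = nn.Linear(var_feats, hidden)
        self.cons_embed = nn.Linear(cons_feats, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, var_x, cons_x, edges):
        # edges: (2, E) long tensor of (constraint, variable) index pairs,
        # i.e. the nonzero pattern of the constraint matrix.
        h_v, h_c = self.var_embed(var_x), self.cons_embed(cons_x)
        agg = torch.zeros_like(h_v)  # sum messages over incident constraints
        agg.index_add_(0, edges[1], self.msg(h_c)[edges[0]])
        return self.score(torch.cat([h_v, agg], dim=-1)).squeeze(-1)

# Toy instance: 10 variables, 6 constraints, 20 random nonzeros.
var_x, cons_x = torch.randn(10, 5), torch.randn(6, 3)
edges = torch.stack([torch.randint(0, 6, (20,)), torch.randint(0, 10, (20,))])

# Imitation learning: treat the strong-branching expert's choice as a class
# label and train the per-variable scores with cross-entropy (index 3 is a
# stand-in for the expert's pick here).
policy = BipartiteGCN(var_feats=5, cons_feats=3)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
logits = policy(var_x, cons_x, edges)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))
loss.backward()
opt.step()

Because the scores are computed per node of the bipartite graph, the same trained weights apply unchanged to instances with many more variables and constraints, which is what lets the learned policy generalize to larger problems.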