
Yann Bouteiller

Collaborating researcher - Polytechnique Montréal, Montreal
Research Topics
Computational Neuroscience
Computer Vision
Deep Learning
Dynamical Systems
Machine Learning Theory
Reinforcement Learning

Publications

Sociodynamics of Reinforcement Learning
Reinforcement Learning (RL) has emerged as a core algorithmic paradigm explicitly driving innovation in a growing number of industrial applications, including large language models and quantitative finance. Furthermore, computational neuroscience has long found evidence of natural forms of RL in biological brains. Therefore, it is crucial for the study of social dynamics to develop a scientific understanding of how RL shapes population behaviors. We leverage the framework of Evolutionary Game Theory (EGT) to provide building blocks and insights toward this objective. We propose a methodology that enables simulating large populations of RL agents in simple game theoretic interaction models. More specifically, we derive fast and parallelizable implementations of two fundamental revision protocols from multi-agent RL - Policy Gradient (PG) and Opponent-Learning Awareness (LOLA) - tailored for population simulations of random pairwise interactions in stateless normal-form games. Our methodology enables us to simulate large populations of 200,000 independent co-learning agents, yielding compelling insights into how non-stationarity-aware learners affect social dynamics. In particular, we find that LOLA learners promote cooperation in the Stag Hunt model, delay cooperative outcomes in the Hawk-Dove model, and reduce strategy diversity in the Rock-Paper-Scissors model.
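A minimal sketch of the kind of setup the abstract describes: a vectorized population of independent policy-gradient (REINFORCE) learners repeatedly matched in random pairs to play a stateless Stag Hunt. This is an illustrative assumption, not the paper's implementation; the payoff matrix, population size, and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stag Hunt payoffs (illustrative values): rows = my action,
# columns = opponent's action, with 0 = Stag, 1 = Hare.
# Stag-Stag pays best, but Hare is the safer choice.
PAYOFF = np.array([[4.0, 1.0],
                   [3.0, 2.0]])

N = 10_000    # population size (the paper scales to 200,000)
LR = 0.05     # policy-gradient step size
STEPS = 500

# Each agent holds one logit: P(play Hare) = sigmoid(theta).
theta = rng.normal(0.0, 0.5, size=N)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(STEPS):
    # Random pairwise matching: shuffle agents and pair the two halves.
    perm = rng.permutation(N)
    a_idx, b_idx = perm[: N // 2], perm[N // 2:]

    p = sigmoid(theta)                          # per-agent P(Hare)
    actions = (rng.random(N) < p).astype(int)   # 0 = Stag, 1 = Hare

    rewards = np.empty(N)
    rewards[a_idx] = PAYOFF[actions[a_idx], actions[b_idx]]
    rewards[b_idx] = PAYOFF[actions[b_idx], actions[a_idx]]

    # REINFORCE update for a Bernoulli policy:
    # d/dtheta log pi(a | theta) = a - sigmoid(theta).
    theta += LR * rewards * (actions - p)

stag_rate = 1.0 - sigmoid(theta).mean()
print(f"mean P(Stag) after training: {stag_rate:.2f}")
```

Because the updates are pure NumPy array operations over all agents at once, a single step costs one shuffle and a few elementwise passes, which is what makes population sizes in the hundreds of thousands tractable.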
From the Lab to the Theater: An Unconventional Field Robotics Journey
Ali Imran
Vivek Shankar Vardharajan
Rafael Gomes Braga
Abdalwhab Abdalwhab
Matthis Di-Giacomo
Alexandra Mercader
David St-Onge
Reinforcement Learning with Random Delays
Simon Ramstedt
Christopher Pal
Action and observation delays commonly occur in many Reinforcement Learning applications, such as remote control scenarios. We study the anatomy of randomly delayed environments, and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this principle to derive Delay-Correcting Actor-Critic (DCAC), an algorithm based on Soft Actor-Critic with significantly better performance in environments with delays. This is shown theoretically and also demonstrated practically on a delay-augmented version of the MuJoCo continuous control benchmark.
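To make the notion of a randomly delayed environment concrete, here is a hedged sketch of an action-delay wrapper: the environment executes the action chosen `delay` steps ago, with the delay resampled each step from a bounded range. The class, the `EchoEnv` toy environment, and all names are illustrative assumptions, not the paper's delay-augmented benchmark code.

```python
from collections import deque
import random

class RandomActionDelayWrapper:
    """Buffers recent actions; each step, the environment executes the
    action submitted `delay` steps ago, with `delay` drawn uniformly
    from [min_delay, max_delay] (a simplified delay model)."""

    def __init__(self, env, min_delay=0, max_delay=2, noop_action=0, seed=0):
        self.env = env
        self.min_delay = min_delay
        self.max_delay = max_delay
        self.noop = noop_action
        self.rng = random.Random(seed)
        self.buffer = deque()

    def reset(self):
        # Pre-fill with no-ops so the first executed actions are defined.
        self.buffer = deque([self.noop] * self.max_delay)
        return self.env.reset()

    def step(self, action):
        self.buffer.append(action)
        # Keep only the last max_delay + 1 actions.
        while len(self.buffer) > self.max_delay + 1:
            self.buffer.popleft()
        delay = self.rng.randint(self.min_delay, self.max_delay)
        executed = self.buffer[-(delay + 1)]
        return self.env.step(executed)

class EchoEnv:
    """Toy environment whose observation is the executed action."""
    def reset(self):
        return 0
    def step(self, action):
        return action, 0.0, False, {}

# With min_delay == max_delay == 1 the delay is deterministic, so each
# observation echoes the action submitted one step earlier.
env = RandomActionDelayWrapper(EchoEnv(), min_delay=1, max_delay=1)
env.reset()
executed = [env.step(a)[0] for a in (7, 8, 9)]
print(executed)  # [0, 7, 8]
```

The mismatch this wrapper creates between the action an agent just chose and the action the environment actually executes is exactly what breaks standard one-step value estimation, motivating the hindsight resampling idea behind DCAC.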