
Amir-massoud Farahmand

Core Academic Member
Associate Professor, Polytechnique Montréal
University of Toronto
Research Topics
Reinforcement Learning
Deep Learning
Reasoning
Machine Learning Theory

Biography

Amir-massoud Farahmand is an Associate Professor in the Department of Computer and Software Engineering at Polytechnique Montréal and a Core Academic Member at Mila - Quebec Artificial Intelligence Institute, as well as an Associate Professor (status-only) in the Department of Computer Science at the University of Toronto. He was a research scientist and CIFAR AI Chair at the Vector Institute in Toronto from 2018 to 2024, and a principal research scientist at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, USA, from 2014 to 2018. He received his PhD from the University of Alberta in 2011, followed by postdoctoral fellowships at McGill University (2011-2014) and Carnegie Mellon University (CMU) (2014).

Amir-massoud's research vision is to understand the computational and statistical mechanisms required to design efficient artificial intelligence agents that interact with their environment and adaptively improve their long-term performance. He has experience developing reinforcement learning and machine learning methods to solve industrial problems.

Current Students

Research Collaborator - McGill University
Research Collaborator - University of Toronto

Publications

Dissecting Deep RL with High Update Ratios: Combatting Value Overestimation and Divergence
Marcel Hussing
Claas Voelcker
Igor Gilitschenski
Eric R. Eaton
We show that deep reinforcement learning can maintain its ability to learn without resetting network parameters in settings where the number of gradient updates greatly exceeds the number of environment samples. Under such large update-to-data ratios, a recent study by Nikishin et al. (2022) suggested the emergence of a primacy bias, in which agents overfit early interactions and downplay later experience, impairing their ability to learn. In this work, we dissect the phenomena underlying the primacy bias. We inspect the early stages of training that ought to cause the failure to learn and find that a fundamental challenge is a long-standing acquaintance: value overestimation. Overinflated Q-values are found not only on out-of-distribution but also in-distribution data and can be traced to unseen action prediction propelled by optimizer momentum. We employ a simple unit-ball normalization that enables learning under large update ratios, show its efficacy on the widely used dm_control suite, and obtain strong performance on the challenging dog tasks, competitive with model-based approaches. Our results question, in parts, the prior explanation for sub-optimal learning due to overfitting on early data.
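To make the idea of unit-ball normalization concrete, here is a minimal PyTorch sketch. It assumes the normalization divides the Q-network's penultimate-layer features by their Euclidean norm before the final linear layer, which keeps the scale of the predicted Q-values bounded; the class name, layer sizes, and hyperparameters are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn


class UnitBallQNetwork(nn.Module):
    """Q-network whose penultimate features are projected onto the unit ball.

    A minimal sketch: "unit-ball normalization" is assumed here to mean
    dividing the penultimate-layer features by their Euclidean norm before
    the final linear layer. Layer sizes and names are illustrative.
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.q_head = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(torch.cat([obs, act], dim=-1))
        # Project features onto the unit ball (eps avoids division by zero).
        feat = feat / feat.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        return self.q_head(feat)
```

In practice, such a critic would replace the standard Q-network of an off-policy agent trained with a high update-to-data ratio.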
Dissecting Deep RL with High Update Ratios: Combatting Value Divergence.
Marcel Hussing
Claas Voelcker
Igor Gilitschenski
Eric R. Eaton
PID Accelerated Temporal Difference Algorithms
Mark Bedaywi
Amin Rakhsha
Long-horizon tasks, which have a large discount factor, pose a challenge for most conventional reinforcement learning (RL) algorithms. Algorithms such as Value Iteration and Temporal Difference (TD) learning have a slow convergence rate and become inefficient in these tasks. When the transition distributions are given, PID VI was recently introduced to accelerate the convergence of Value Iteration using ideas from control theory. Inspired by this, we introduce PID TD Learning and PID Q-Learning algorithms for the RL setting, in which only samples from the environment are available. We give a theoretical analysis of the convergence of PID TD Learning and its acceleration compared to the conventional TD Learning. We also introduce a method for adapting PID gains in the presence of noise and empirically verify its effectiveness.
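As a rough illustration of the control-theoretic view, the sketch below implements a tabular policy-evaluation variant in the spirit of PID TD Learning. The TD error plays the role of the controller's error signal, a decayed running sum of TD errors acts as the integral term, and the change in the value estimate between consecutive updates acts as the derivative term; the gains, decay rate, and step size are hypothetical defaults, not values from the paper.

```python
import numpy as np


def pid_td_learning(transitions, n_states, gamma,
                    alpha=0.1, kp=1.0, ki=0.05, kd=0.2, beta=0.9):
    """Tabular PID-style TD learning sketch for policy evaluation.

    `transitions` is an iterable of (s, r, s_next, done) tuples generated by
    the evaluated policy. Gains kp/ki/kd, decay beta, and step size alpha are
    assumed defaults for illustration only.
    """
    V = np.zeros(n_states)       # current value estimate
    V_prev = np.zeros(n_states)  # previous estimate, for the derivative term
    z = np.zeros(n_states)       # accumulated (decayed) TD error: integral term

    for s, r, s_next, done in transitions:
        td_error = r + (0.0 if done else gamma * V[s_next]) - V[s]
        z[s] = beta * z[s] + td_error          # integral of past TD errors
        derivative = V[s] - V_prev[s]          # change since the last update
        V_prev[s] = V[s]
        V[s] += alpha * (kp * td_error + ki * z[s] + kd * derivative)
    return V
```

With kp = 1 and ki = kd = 0, the update reduces to conventional TD(0), which makes the comparison to the accelerated variant easy to run side by side.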