
Amir-massoud Farahmand

Core Academic Member
Associate Professor, Polytechnique Montréal
University of Toronto
Research Topics
Deep Learning
Machine Learning Theory
Reasoning
Reinforcement Learning

Biography

Amir-massoud Farahmand is an associate professor in the Department of Computer and Software Engineering at Polytechnique Montréal, a core academic member at Mila - Quebec Artificial Intelligence Institute, and an associate professor (status-only) in the Department of Computer Science at the University of Toronto. He was a research scientist and CIFAR AI Chair at the Vector Institute in Toronto from 2018 to 2024, and a principal research scientist at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, USA, from 2014 to 2018. He received his PhD from the University of Alberta in 2011, followed by postdoctoral fellowships at McGill University (2011–2014) and Carnegie Mellon University (CMU) (2014).

Amir-massoud’s research vision is to understand the computational and statistical mechanisms required to design efficient AI agents that interact with their environment and adaptively improve their long-term performance. He also has experience developing reinforcement learning and machine learning methods to solve industrially motivated problems.

Current Students

Collaborating researcher - McGill University
Collaborating researcher - University of Toronto
Collaborating researcher - Polytechnique Montréal
Master's Research - Polytechnique Montréal

Publications

Dissecting Deep RL with High Update Ratios: Combatting Value Divergence
Marcel Hussing
Claas Voelcker
Igor Gilitschenski
Eric R. Eaton
PID Accelerated Temporal Difference Algorithms
Mark Bedaywi
Amin Rakhsha
Long-horizon tasks, which have a large discount factor, pose a challenge for most conventional reinforcement learning (RL) algorithms. Algorithms such as Value Iteration and Temporal Difference (TD) learning have a slow convergence rate and become inefficient in these tasks. When the transition distributions are given, PID VI was recently introduced to accelerate the convergence of Value Iteration using ideas from control theory. Inspired by this, we introduce PID TD Learning and PID Q-Learning algorithms for the RL setting, in which only samples from the environment are available. We give a theoretical analysis of the convergence of PID TD Learning and its acceleration compared to conventional TD Learning. We also introduce a method for adapting PID gains in the presence of noise and empirically verify its effectiveness.
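As a hedged illustration of the idea in this abstract, the sketch below adds PID-style proportional, integral, and derivative terms to tabular TD(0) on a toy deterministic chain. The specific gain values, the integral-decay factor `beta`, and the chain environment are assumptions chosen for illustration; this is not the paper's exact algorithm or tuning.

```python
import numpy as np

def pid_td(env_step, n_states, gamma=0.99, alpha=0.05,
           kp=1.0, ki=0.05, kd=0.2, beta=0.95, n_steps=20000, seed=0):
    """Sketch of PID-accelerated TD(0) for tabular policy evaluation.

    kp/ki/kd are proportional/integral/derivative gains; setting
    kp=1, ki=kd=0 recovers plain TD(0). beta decays the running
    integral of TD errors. All values here are illustrative.
    """
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states)        # current value estimate
    V_prev = np.zeros(n_states)   # estimate at previous visit (derivative term)
    z = np.zeros(n_states)        # decayed running sum of TD errors (integral term)
    s = 0
    for _ in range(n_steps):
        s2, r, done = env_step(s, rng)           # sample one transition
        delta = r + (0.0 if done else gamma * V[s2]) - V[s]  # TD error
        z[s] = beta * z[s] + delta               # update integral accumulator
        d = V[s] - V_prev[s]                     # change since last visit
        V_prev[s] = V[s]
        V[s] += alpha * (kp * delta + ki * z[s] + kd * d)
        s = 0 if done else s2
    return V

# Usage on a hypothetical 5-state deterministic chain: state i steps to
# i+1 with reward 0; the last state yields reward 1 and terminates.
def chain_step(s, rng):
    if s == 4:
        return 0, 1.0, True
    return s + 1, 0.0, False

V = pid_td(chain_step, n_states=5, gamma=0.9)
```

With this setup the true values are V[s] = 0.9^(4 - s), so the estimate at the initial state should approach 0.9^4 ≈ 0.656.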
When does Self-Prediction help? Understanding Auxiliary Tasks in Reinforcement Learning
Claas Voelcker
Igor Gilitschenski
We investigate the impact of auxiliary learning tasks such as observation reconstruction and latent self-prediction on the representation learning problem in reinforcement learning. We also study how they interact with distractions and observation functions in the MDP. We provide a theoretical analysis of the learning dynamics of observation reconstruction, latent self-prediction, and TD learning in the presence of distractions and observation functions under linear model assumptions. With this formalization, we are able to explain why latent self-prediction is a helpful auxiliary task, while observation reconstruction can provide more useful features when used in isolation. Our empirical analysis shows that the insights obtained from our learning dynamics framework predict the behavior of these loss functions beyond the linear model assumption in non-linear neural networks. This reinforces the usefulness of the linear model framework not only for theoretical analysis, but also for practical benefit in applied problems.
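As a rough illustration of the two auxiliary losses discussed in this abstract, the sketch below trains a shared linear encoder with a latent self-prediction head (whose target is treated as a stop-gradient, i.e., no gradient flows through it) alongside an observation reconstruction head, on synthetic linear-Gaussian transitions. The dimensions, learning rate, and distractor construction are hypothetical and not the paper's experimental setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_lat, n = 8, 3, 256

# Hypothetical linear-Gaussian transitions x' = A x + noise, where only
# the first 4 observation dimensions carry dynamics; the remaining
# dimensions act like distractors that the transition matrix ignores.
A = np.zeros((d_obs, d_obs))
A[:4, :4] = 0.4 * rng.standard_normal((4, 4))
X = rng.standard_normal((n, d_obs))            # observations x
Xn = X @ A.T + 0.1 * rng.standard_normal((n, d_obs))  # next observations x'

W = 0.1 * rng.standard_normal((d_lat, d_obs))  # shared linear encoder
P = 0.1 * rng.standard_normal((d_lat, d_lat))  # latent transition model
D = 0.1 * rng.standard_normal((d_obs, d_lat))  # reconstruction decoder
lr, losses = 1e-2, []
for _ in range(3000):
    Z, Zn = X @ W.T, Xn @ W.T                  # current and next latents
    # Latent self-prediction: predict the next latent; Zn is a
    # stop-gradient target, so no gradient flows into W through it.
    err_sp = Z @ P.T - Zn
    # Observation reconstruction: decode x back from its latent.
    err_rec = Z @ D.T - X
    losses.append(0.5 * ((err_sp**2).mean() + (err_rec**2).mean()))
    # Manual gradient-descent steps for the two squared losses.
    P -= lr * (err_sp.T @ Z) / n
    D -= lr * (err_rec.T @ Z) / n
    W -= lr * (P.T @ err_sp.T @ X + D.T @ err_rec.T @ X) / n
```

Note the asymmetry this makes concrete: the self-prediction loss only asks the latent to be predictable (and can shrink by discarding unpredictable distractor dimensions), while the reconstruction loss forces the latent to retain enough of every observation dimension to rebuild x.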