
Sahand Rezaei-Shoshtari

PhD - McGill University
Research Topics
Reinforcement Learning

Publications

Fairness in Reinforcement Learning with Bisimulation Metrics
Ensuring long-term fairness is crucial when developing automated decision-making systems, particularly in dynamic and sequential environments. By maximizing their reward without consideration of fairness, AI agents can introduce disparities in their treatment of groups or individuals. In this paper, we establish the connection between bisimulation metrics and group fairness in reinforcement learning. We propose a novel approach that leverages bisimulation metrics to learn reward functions and observation dynamics, ensuring that learners treat groups fairly while reflecting the original problem. We demonstrate the effectiveness of our method in addressing disparities in sequential decision-making problems through empirical evaluation on a standard fairness benchmark consisting of lending and college admission scenarios.
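To make the core object concrete, the sketch below computes a standard bisimulation metric (in the sense of Ferns et al.) for a small tabular MDP by fixed-point iteration. It is a minimal illustration rather than the paper's method: it assumes the POT library (ot.emd2) for the Wasserstein term, and the reward array, transition array, and coefficients are hypothetical.

```python
import numpy as np
import ot  # POT: Python Optimal Transport, used here for exact 1-Wasserstein distances


def bisimulation_metric(R, P, c_r=1.0, c_t=0.9, iters=200, tol=1e-6):
    """Fixed-point iteration for the standard bisimulation metric on a tabular MDP.

    R: (A, S) array of rewards, P: (A, S, S) array of transition probabilities.
    c_t < 1 makes the update a contraction, so the iteration converges.
    """
    A, S = R.shape
    d = np.zeros((S, S))
    for _ in range(iters):
        d_new = np.zeros_like(d)
        for s in range(S):
            for t in range(S):
                vals = []
                for a in range(A):
                    # 1-Wasserstein distance between next-state distributions,
                    # using the current metric estimate d as the ground cost.
                    w1 = ot.emd2(P[a, s], P[a, t], d)
                    vals.append(c_r * abs(R[a, s] - R[a, t]) + c_t * w1)
                d_new[s, t] = max(vals)
        if np.max(np.abs(d_new - d)) < tol:
            return d_new
        d = d_new
    return d


# Tiny 2-state, 1-action example with made-up numbers.
R = np.array([[0.0, 1.0]])
P = np.array([[[0.9, 0.1], [0.1, 0.9]]])
print(bisimulation_metric(R, P))
```

States that receive similar rewards and have similar transition behavior end up close under this metric, which is the property the fairness formulation builds on.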
Policy Gradient Methods in the Presence of Symmetries and State Abstractions
Hypernetworks for Zero-shot Transfer in Reinforcement Learning
Charlotte Morissette
Francois Hogan
In this paper, hypernetworks are trained to generate behaviors across a range of unseen task conditions, via a novel TD-based training objective and data from a set of near-optimal RL solutions for training tasks. This work relates to meta RL, contextual RL, and transfer learning, with a particular focus on zero-shot performance at test time, enabled by knowledge of the task parameters (also known as context). Our technical approach views each RL algorithm as a mapping from the MDP specifics to the near-optimal value function and policy, and seeks to approximate this mapping with a hypernetwork that can generate near-optimal value functions and policies given the parameters of the MDP. We show that, under certain conditions, this mapping can be treated as a supervised learning problem. We empirically evaluate the effectiveness of our method for zero-shot transfer to new reward and transition dynamics on a series of continuous control tasks from the DeepMind Control Suite. Our method demonstrates significant improvements over baselines from multitask and meta RL approaches.
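As a rough illustration of the weight-generation mechanics only (not the paper's TD-based objective or architecture), the PyTorch sketch below maps a task/context vector to the parameters of a small policy network and then runs that generated policy on an observation. All layer sizes, dimensions, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PolicyHypernetwork(nn.Module):
    """Maps a task/context vector to the weights of a one-hidden-layer policy (illustrative sizes)."""

    def __init__(self, context_dim, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.obs_dim, self.act_dim, self.hidden = obs_dim, act_dim, hidden
        # Total parameter count of the target policy: two linear layers with biases.
        n_params = (obs_dim * hidden + hidden) + (hidden * act_dim + act_dim)
        self.generator = nn.Sequential(
            nn.Linear(context_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, context, obs):
        # Single (unbatched) context and observation, kept 1-D for clarity.
        theta = self.generator(context)
        o, h, a = self.obs_dim, self.hidden, self.act_dim
        i = 0
        W1 = theta[i:i + o * h].view(h, o); i += o * h
        b1 = theta[i:i + h]; i += h
        W2 = theta[i:i + h * a].view(a, h); i += h * a
        b2 = theta[i:i + a]
        # Run the generated policy on the observation.
        x = torch.relu(obs @ W1.T + b1)
        return torch.tanh(x @ W2.T + b2)


hnet = PolicyHypernetwork(context_dim=3, obs_dim=8, act_dim=2)
action = hnet(torch.randn(3), torch.randn(8))  # context -> policy weights -> action
```

Training such a generator against near-optimal value functions and policies collected on training tasks is what makes the zero-shot setting look like a supervised learning problem.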
Continuous MDP Homomorphisms and Homomorphic Policy Gradient
Learning Intuitive Physics with Multimodal Generative Models
Predicting the future interaction of objects when they come into contact with their environment is key for autonomous agents to take intelligent and anticipatory actions. This paper presents a perception framework that fuses visual and tactile feedback to make predictions about the expected motion of objects in dynamic scenes. Visual information captures object properties such as 3D shape and location, while tactile information provides critical cues about interaction forces and resulting object motion when the object makes contact with the environment. Utilizing a novel See-Through-your-Skin (STS) sensor that provides high-resolution multimodal sensing of contact surfaces, our system captures both the visual appearance and the tactile properties of objects. We interpret the dual-stream signals from the sensor using a Multimodal Variational Autoencoder (MVAE), allowing us to capture both modalities of contacting objects and to develop a mapping from visual to tactile interaction and vice versa. Additionally, the perceptual system can be used to infer the outcome of future physical interactions, which we validate through simulated and real-world experiments in which the resting state of an object is predicted from given initial conditions.
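A minimal sketch of the multimodal fusion idea, assuming a product-of-experts Gaussian fusion as in standard MVAE formulations; the flattened inputs, encoder/decoder sizes, and latent dimension are illustrative assumptions and not the paper's architecture.

```python
import torch
import torch.nn as nn


def product_of_experts(mus, logvars):
    """Fuse Gaussian experts (plus an implicit standard-normal prior expert) by precision weighting."""
    prior_mu = torch.zeros_like(mus[0])
    prior_logvar = torch.zeros_like(logvars[0])
    mus = [prior_mu] + list(mus)
    logvars = [prior_logvar] + list(logvars)
    precisions = [torch.exp(-lv) for lv in logvars]
    precision = sum(precisions)
    mu = sum(m * w for m, w in zip(mus, precisions)) / precision
    return mu, -torch.log(precision)


class VisuoTactileMVAE(nn.Module):
    """Minimal multimodal VAE over flattened visual and tactile inputs (illustrative sizes)."""

    def __init__(self, vis_dim=1024, tac_dim=256, z_dim=32):
        super().__init__()
        self.enc_vis = nn.Sequential(nn.Linear(vis_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.enc_tac = nn.Sequential(nn.Linear(tac_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.dec_vis = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, vis_dim))
        self.dec_tac = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, tac_dim))

    def forward(self, vis=None, tac=None):
        # Either modality may be missing; the shared latent is inferred from whatever is observed.
        mus, logvars = [], []
        if vis is not None:
            mu, lv = self.enc_vis(vis).chunk(2, dim=-1); mus.append(mu); logvars.append(lv)
        if tac is not None:
            mu, lv = self.enc_tac(tac).chunk(2, dim=-1); mus.append(mu); logvars.append(lv)
        mu, logvar = product_of_experts(mus, logvars)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec_vis(z), self.dec_tac(z), mu, logvar


model = VisuoTactileMVAE()
vis_hat, tac_hat, mu, logvar = model(vis=torch.randn(4, 1024))  # tactile predicted from vision only
```

Because both decoders read the same shared latent, conditioning on one modality and decoding the other gives the visual-to-tactile (and vice versa) mapping described above.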
Seeing Through Your Skin: A Novel Visuo-Tactile Sensor for Robotic Manipulation
Francois Hogan
M. Jenkin
Yashveer Girdhar
This work describes the development of a novel tactile sensor, the Semitransparent Tactile Sensor (STS), designed to enable reactive and robust manipulation skills. The design, inspired by recent developments in optical tactile sensing technology, addresses a key missing feature of these sensors: the ability to capture an “in the hand” perspective prior to and during the contact interaction. Whereas optical tactile sensors are typically opaque and obscure the view of the object at the critical moment prior to manipulator-object contact, we present a sensor that has the dual capabilities of acting as a tactile sensor and as a visual camera. This paper details the design and fabrication of the sensor, showcases its dual sensing capabilities, and introduces a simulated environment of the sensor within the PyBullet simulator.
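For readers who want to experiment with this kind of simulated setup, the snippet below shows the general pattern of pairing a rendered camera view with contact queries in PyBullet. The objects, poses, and camera placement are illustrative assumptions, not the released STS simulation environment.

```python
import pybullet as p
import pybullet_data

# Headless physics server with a plane and a small cube to make contact with.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane = p.loadURDF("plane.urdf")
cube = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 0.5])

# "Visual" stream: render from a virtual camera placed roughly where a sensor surface would sit.
view = p.computeViewMatrix(cameraEyePosition=[0, 0, 1.0],
                           cameraTargetPosition=[0, 0, 0],
                           cameraUpVector=[0, 1, 0])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=0.01, farVal=2.0)

# Let the cube fall and settle on the plane.
for _ in range(240):
    p.stepSimulation()

width, height, rgb, depth, seg = p.getCameraImage(128, 128, viewMatrix=view, projectionMatrix=proj)

# "Tactile" stream: contact points between the cube and the plane stand in for the contact patch.
contacts = p.getContactPoints(bodyA=cube, bodyB=plane)
print(f"rendered {width}x{height} image, {len(contacts)} contact points")
p.disconnect()
```

Combining the rendered image with the contact information at each simulation step is the basic recipe for emulating a sensor that behaves as both a camera and a tactile surface.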