
Hanna Yurchyk

PhD - McGill
Principal supervisor
Research Topics
Representation Learning
Reinforcement Learning
Probabilistic Models
Robotics
Dynamical Systems
Computer Vision

Publications

Large Pre-Trained Models for Bimanual Manipulation in 3D
We investigate the integration of attention maps from a pre-trained Vision Transformer into voxel representations to enhance bimanual robotic manipulation. Specifically, we extract attention maps from DINOv2, a self-supervised ViT model, and interpret them as pixel-level saliency scores over RGB images. These maps are lifted into a 3D voxel grid, resulting in voxel-level semantic cues that are incorporated into a behavior cloning policy. When integrated into a state-of-the-art voxel-based policy, our attention-guided featurization yields an average absolute improvement of 8.2% and a relative gain of 21.9% across all tasks in the RLBench bimanual benchmark.
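The two steps the abstract describes, reading out an attention map from DINOv2 and splatting the resulting per-pixel saliency into a voxel grid, can be sketched as below. This is a hedged illustration, not the paper's code: the hook-based CLS-attention readout, the 224x224 input assumption, and the helpers `cls_saliency` and `lift_saliency_to_voxels` are hypothetical, and the integration into the behavior cloning policy is omitted.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: pixel-level saliency from DINOv2's last-block CLS
# attention, then a naive splat of that saliency into a 3D voxel grid.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

def cls_saliency(image: torch.Tensor, patch: int = 14) -> torch.Tensor:
    """Per-pixel saliency from CLS attention; expects an ImageNet-normalized
    (B, 3, H, W) tensor with H and W multiples of `patch` (e.g. 224)."""
    captured = {}
    def hook(_, __, output):           # capture the qkv projection output
        captured["qkv"] = output
    handle = model.blocks[-1].attn.qkv.register_forward_hook(hook)
    with torch.no_grad():
        model(image)
    handle.remove()
    B, N, _ = captured["qkv"].shape
    heads = model.blocks[-1].attn.num_heads
    qkv = captured["qkv"].reshape(B, N, 3, heads, -1).permute(2, 0, 3, 1, 4)
    q, k = qkv[0], qkv[1]              # (B, heads, N, head_dim)
    attn = (q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5).softmax(dim=-1)
    sal = attn[:, :, 0, 1:].mean(dim=1)          # CLS -> patch tokens
    h, w = image.shape[-2] // patch, image.shape[-1] // patch
    sal = sal.reshape(B, 1, h, w)
    return F.interpolate(sal, size=image.shape[-2:], mode="bilinear")

def lift_saliency_to_voxels(sal, depth, K, grid=64, bound=1.0):
    """Backproject (H, W) saliency into a (grid, grid, grid) voxel volume
    using per-pixel depth and camera intrinsics K (hypothetical helper)."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    x = (xs - K[0, 2]) * depth / K[0, 0]
    y = (ys - K[1, 2]) * depth / K[1, 1]
    pts = torch.stack([x, y, depth], dim=-1).reshape(-1, 3)
    idx = ((pts + bound) / (2 * bound) * grid).long().clamp(0, grid - 1)
    vox = torch.zeros(grid, grid, grid)
    flat = idx[:, 0] * grid * grid + idx[:, 1] * grid + idx[:, 2]
    vox.view(-1).index_add_(0, flat, sal.reshape(-1))  # accumulate saliency
    return vox
```

In this reading, the voxel volume simply accumulates the saliency of every pixel that backprojects into a cell; the paper's voxel-level semantic cues may be computed differently.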
Fairness in Reinforcement Learning with Bisimulation Metrics
Ensuring long-term fairness is crucial when developing automated decision-making systems, specifically in dynamic and sequential environments. By maximizing their reward without consideration of fairness, AI agents can introduce disparities in their treatment of groups or individuals. In this paper, we establish the connection between bisimulation metrics and group fairness in reinforcement learning. We propose a novel approach that leverages bisimulation metrics to learn reward functions and observation dynamics, ensuring that learners treat groups fairly while reflecting the original problem. We demonstrate the effectiveness of our method in addressing disparities in sequential decision-making problems through empirical evaluation on a standard fairness benchmark consisting of lending and college admission scenarios.
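The abstract does not spell out the objective, but a common way to train an encoder aligned with a bisimulation metric is the on-policy loss from deep bisimulation for control (Zhang et al., 2021): latent L1 distances are regressed onto reward differences plus a discounted 2-Wasserstein distance between predicted next-state Gaussians. The sketch below assumes hypothetical `encoder`, `reward_model`, and `dynamics` modules and may differ from the paper's fairness-aware variant.

```python
import torch
import torch.nn.functional as F

def bisim_loss(encoder, reward_model, dynamics, obs, gamma=0.99):
    """Match latent L1 distances to |r_i - r_j| + gamma * W2 between the
    predicted diagonal-Gaussian next-latent distributions (DBC-style)."""
    z = encoder(obs)                       # (B, D) latent states
    perm = torch.randperm(z.shape[0])      # random pairing of states
    z2 = z[perm]
    r, r2 = reward_model(z), reward_model(z)[perm]
    mu, sigma = dynamics(z)                # Gaussian next-latent prediction
    mu2, sigma2 = mu[perm], sigma[perm]
    # Closed-form 2-Wasserstein distance between diagonal Gaussians.
    w2 = torch.sqrt(((mu - mu2) ** 2).sum(-1) + ((sigma - sigma2) ** 2).sum(-1))
    target = (r - r2).abs().squeeze(-1) + gamma * w2
    dist = (z - z2).abs().sum(-1)          # L1 distance in latent space
    return F.mse_loss(dist, target.detach())
```

Under this objective, two observations that yield the same rewards and the same dynamics, for example the same applicant profile from two different groups, are pushed toward the same latent code, which is the intuition behind linking bisimulation to group fairness.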