Khimya Khetarpal

Affiliate Member
Research Scientist, Google DeepMind
Research Topics
Representation Learning
Online Learning
Reinforcement Learning
Machine Learning Theory

Biography

Khimya Khetarpal is a Research Scientist at Google DeepMind. She earned her PhD in computer science at the Reasoning and Learning Lab of McGill University and Mila, co-supervised by Doina Precup. She is broadly interested in artificial intelligence and reinforcement learning. Her current research focuses on how RL agents learn to efficiently represent knowledge about the world, plan with it, and adapt to changes over time. Khimya's work has been published in leading artificial intelligence journals and conferences, including NeurIPS, ICML, AAAI, AISTATS, ICLR, The Knowledge Engineering Review, ACM, JAIR, and TMLR. Her work has also been featured in MIT Technology Review.

She was recognized as a TMLR Expert Reviewer in 2023, named one of the EECS Rising Stars 2020, was a finalist in the AAAI 2019 Three Minute Thesis (3MT) competition, was selected for the AAAI 2019 Doctoral Consortium, and received a Best Paper Award (3rd prize) at the ICML 2018 workshop on lifelong learning. Throughout her career, she has strived to be an active mentor through initiatives such as co-founding the Mila peer-advising initiative, teaching and assisting at the AI4Good Lab, volunteering with Skype A Scientist, and mentoring in FIRST Robotics.

Her research aims to (1) understand intelligent behaviour that bridges action and perception, building on the theoretical foundations of reinforcement learning, and (2) build artificial intelligence agents that efficiently represent knowledge about the world, plan with it, and adapt to change over time through learning and interaction.

She currently pursues these questions along the following research directions:

- Selective attention for rapid adaptation and robustness

- Learning abstractions and affordances

- Discovery and continual reinforcement learning

Current Students

PhD - UdeM
Principal supervisor:

Publications

Self-Supervised Attention-Aware Reinforcement Learning
Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, much of the existing research uses it as an analyzing tool rather than an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can 1. learn to select regions of interest without explicit annotations, and 2. act as a plug for existing deep RL methods to improve the learning performance. We empirically show that the self-supervised attention-aware deep RL methods outperform the baselines in the context of both the rate of convergence and performance. Furthermore, the proposed self-supervised attention is not tied with specific policies, nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and empirically validate the generalization ability of the proposed method. Finally, we show that our method learns meaningful object keypoints highlighting improvements both qualitatively and quantitatively.
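The abstract above treats attention as an inductive bias rather than a post-hoc visualization. Below is a minimal sketch of that general idea, assuming a PyTorch-style setup; the module, shapes, and the sparsity regularizer are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a soft spatial attention mask learned jointly with
# an actor-critic head. Names, shapes, and the regularizer are assumptions.
import torch
import torch.nn as nn


class AttentionAwareEncoder(nn.Module):
    def __init__(self, in_channels=4, feat_channels=32, num_actions=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=5, stride=2), nn.ReLU(),
        )
        # 1x1 convolution producing a single-channel saliency map used as a soft mask.
        self.attention = nn.Conv2d(feat_channels, 1, kernel_size=1)
        self.policy_head = nn.Linear(feat_channels, num_actions)
        self.value_head = nn.Linear(feat_channels, 1)

    def forward(self, obs):
        feats = self.conv(obs)                       # (B, C, H, W)
        mask = torch.sigmoid(self.attention(feats))  # (B, 1, H, W), values in [0, 1]
        attended = feats * mask                      # keep only the selected regions
        pooled = attended.mean(dim=(2, 3))           # (B, C)
        return self.policy_head(pooled), self.value_head(pooled), mask


def attention_sparsity_loss(mask):
    # Hypothetical self-supervised term: encourage a sparse mask so the agent
    # selects a small set of regions without explicit annotations.
    return mask.mean()
```

In such a setup the sparsity term would simply be added to the usual RL loss, which is what makes an attention module of this kind usable as a plug-in for existing deep RL methods.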
Variance Penalized On-Policy and Off-Policy Actor-Critic
Arushi Jain
Gandharv Patil
Ayush Jain
Safe option-critic: learning safety in the option-critic architecture
Designing hierarchical reinforcement learning algorithms that exhibit safe behaviour is not only vital for practical applications but also facilitates a better understanding of an agent’s decisions. We tackle this problem in the options framework (Sutton, Precup & Singh, 1999), a particular way to specify temporally abstract actions which allow an agent to use sub-policies with start and end conditions. We consider a behaviour as safe that avoids regions of state space with high uncertainty in the outcomes of actions. We propose an optimization objective that learns safe options by encouraging the agent to visit states with higher behavioural consistency. The proposed objective results in a trade-off between maximizing the standard expected return and minimizing the effect of model uncertainty in the return. We propose a policy gradient algorithm to optimize the constrained objective function. We examine the quantitative and qualitative behaviours of the proposed approach in a tabular grid world, continuous-state puddle world, and three games from the Arcade Learning Environment: Ms. Pacman, Amidar, and Q*Bert. Our approach achieves a reduction in the variance of return, boosts performance in environments with intrinsic variability in the reward structure, and compares favourably both with primitive actions and with risk-neutral options.
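As a rough sketch of the trade-off described above (illustrative notation, not the paper's exact constrained objective), one can write a variance-penalized return for a policy $\pi_\theta$ with penalty coefficient $\psi \ge 0$:

$$
J_\psi(\theta) \;=\; \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t \ge 0} \gamma^t r_t\Big] \;-\; \psi\, \mathrm{Var}_{\pi_\theta}\!\Big[\sum_{t \ge 0} \gamma^t r_t\Big].
$$

Larger values of $\psi$ favour options whose returns are more predictable, i.e. visits to states with higher behavioural consistency, at the cost of some expected return.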
Learning Robust State Abstractions for Hidden-Parameter Block MDPs
Amy Zhang
Shagun Sodhani
Temporally Abstract Partial Models
Zafarali Ahmed
Gheorghe Comanici
Humans and animals have the ability to reason and make predictions about different courses of action at many time scales. In reinforcement learning, option models (Sutton, Precup & Singh, 1999; Precup, 2000) provide the framework for this kind of temporally abstract prediction and reasoning. Natural intelligent agents are also able to focus their attention on courses of action that are relevant or feasible in a given situation, sometimes termed affordable actions. In this paper, we define a notion of affordances for options, and develop temporally abstract partial option models, that take into account the fact that an option might be affordable only in certain situations. We analyze the trade-offs between estimation and approximation error in planning and learning when using such models, and identify some interesting special cases. Additionally, we empirically demonstrate the ability to learn both affordances and partial option models online resulting in improved sample efficiency and planning time in the Taxi domain.
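One way to picture a partial option model, under the illustrative assumption that affordances are given as a relation $\mathcal{AF} \subseteq \mathcal{S} \times \mathcal{O}$ of state–option pairs: planning backups only query the model on affordable pairs, so the model never needs to be accurate elsewhere. With $R$ and $P$ denoting the option's (discounted) reward and transition models,

$$
Q(s, o) \;=\; R(s, o) \;+\; \sum_{s'} P(s' \mid s, o)\, \max_{o' :\, (s', o') \in \mathcal{AF}} Q(s', o'), \qquad (s, o) \in \mathcal{AF}.
$$

This is only a sketch of the idea rather than the paper's exact definitions, but it conveys why restricting models to affordable situations can reduce both estimation error and planning time.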
What can I do here? A Theory of Affordances in Reinforcement Learning
Zafarali Ahmed
Gheorghe Comanici
David Abel
Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP
Amy Zhang
Shagun Sodhani
Multi-task reinforcement learning is a rich paradigm where information from previously seen environments can be leveraged for better performance and improved sample-efficiency in new environments. In this work, we leverage ideas of common structure underlying a family of Markov decision processes (MDPs) to improve performance in the few-shot regime. We use assumptions of structure from Hidden-Parameter MDPs and Block MDPs to propose a new framework, HiP-BMDP, and approach for learning a common representation and universal dynamics model. To this end, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks, rather than the number of tasks, a significant improvement over prior work. To demonstrate the efficacy of the proposed method, we empirically compare and show improvements against other multi-task and meta-reinforcement learning baselines.
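A compact, illustrative way to read the HiP-BMDP assumption (notation mine): tasks indexed by $k$ share a latent state space, observations are emitted from the latent state (the block structure), and the transition dynamics depend on a hidden task parameter $\theta_k$:

$$
x_t \sim q(\cdot \mid s_t), \qquad s_{t+1} \sim T_{\theta_k}(\cdot \mid s_t, a_t), \qquad r_t = R(s_t, a_t),
$$

with the reward shown here as shared across tasks for simplicity. Transfer and generalization bounds can then be phrased in terms of how close the hidden parameters of two tasks are, rather than treating tasks as unrelated.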
Value Preserving State-Action Abstractions
David Abel
Nathan Umbanhowar
Dilip Arumugam
Michael L. Littman
Abstraction can improve the sample efficiency of reinforcement learning. However, the process of abstraction inherently discards information, potentially compromising an agent’s ability to represent high-value policies. To mitigate this, we here introduce combinations of state abstractions and options that are guaranteed to preserve the representation of near-optimal policies. We first define φ-relative options, a general formalism for analyzing the value loss of options paired with a state abstraction, and present necessary and sufficient conditions for φ-relative options to preserve near-optimal behavior in any finite Markov Decision Process. We further show that, under appropriate assumptions, φ-relative options can be composed to induce hierarchical abstractions that are also guaranteed to represent high-value policies.
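The flavour of guarantee described above, in illustrative notation: if $\phi$ is the state abstraction and $\mathcal{O}_\phi$ a set of $\phi$-relative options satisfying the stated conditions, then the best policy expressible over $(\phi, \mathcal{O}_\phi)$ loses at most a bounded amount of value,

$$
\max_{s \in \mathcal{S}} \Big( V^{*}(s) - V^{\pi^{*}_{\phi, \mathcal{O}_\phi}}(s) \Big) \;\le\; \varepsilon,
$$

where $\varepsilon$ depends on how much the abstraction distorts values. This is only the shape of the result; the precise conditions and constants are in the paper.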
Options of Interest: Temporal Abstraction with Interest Functions
Martin Klissarov
Maxime Chevalier-Boisvert
Temporal abstraction refers to the ability of an agent to use behaviours of controllers which act for a limited, variable amount of time. The options framework describes such behaviours as consisting of a subset of states in which they can initiate, an internal policy and a stochastic termination condition. However, much of the subsequent work on option discovery has ignored the initiation set, because of difficulty in learning it from data. We provide a generalization of initiation sets suitable for general function approximation, by defining an interest function associated with an option. We derive a gradient-based learning algorithm for interest functions, leading to a new interest-option-critic architecture. We investigate how interest functions can be leveraged to learn interpretable and reusable temporal abstractions. We demonstrate the efficacy of the proposed approach through quantitative and qualitative results, in both discrete and continuous environments.
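A hedged reading of how an interest function reweights the policy over options (notation illustrative): each option $o$ carries an interest function $I_o(s) \in [0, 1]$, and options are sampled according to

$$
\pi_{\mathcal{I}}(o \mid s) \;=\; \frac{I_o(s)\, \pi_\Omega(o \mid s)}{\sum_{o'} I_{o'}(s)\, \pi_\Omega(o' \mid s)}.
$$

A binary $I_o$ recovers a classical initiation set, while a smooth, parameterized $I_o$ admits a gradient-based update alongside the option policies and terminations, which is what the interest-option-critic architecture exploits.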
Learning Options with Interest Functions
Learning temporal abstractions which are partial solutions to a task and could be reused for solving other tasks is an ingredient that can help agents to plan and learn efficiently. In this work, we tackle this problem in the options framework. We aim to autonomously learn options which are specialized in different state space regions by proposing a notion of interest functions, which generalizes initiation sets from the options framework for function approximation. We build on the option-critic framework to derive policy gradient theorems for interest functions, leading to a new interest-option-critic architecture.
Environments for Lifelong Reinforcement Learning
To achieve general artificial intelligence, reinforcement learning (RL) agents should learn not only to optimize returns for one specific task but also to constantly build more complex skills and scaffold their knowledge about the world, without forgetting what has already been learned. In this paper, we discuss the desired characteristics of environments that can support the training and evaluation of lifelong reinforcement learning agents, review existing environments from this perspective, and propose recommendations for devising suitable environments in the future.