
Prakash Panangaden

Core Academic Member
Research Topics
Reinforcement Learning
Probabilistic Models
Reasoning
Machine Learning Theory
Quantum Information Theory

Biography

Prakash Panangaden studied physics at the Indian Institute of Technology in Kanpur, India. He earned a master's degree in physics from the University of Chicago, where he studied stimulated emission from black holes. He then obtained a PhD in physics from the University of Wisconsin-Milwaukee, where he worked on quantum field theory in curved spacetime.

He was an assistant professor of computer science at Cornell University, where he worked primarily on the semantics of concurrent programming languages. Since 1990, he has been at McGill University. Over the past 25 years, he has studied many aspects of Markov processes: process equivalence, logical characterization, approximation, and metrics.

More recently, he has worked on using metrics to improve representation learning. He has also published papers in physics, quantum information, and pure mathematics. He is a Fellow of the Royal Society of Canada and of the Association for Computing Machinery (ACM).

Current Students

Research Master's - McGill
Co-supervisor:

Publications

Studying the Interplay Between the Actor and Critic Representations in Reinforcement Learning
Samuel Garcin
Trevor McInroe
Christopher G. Lucas
David Abel
Stefano V Albrecht
Extracting relevant information from a stream of high-dimensional observations is a central challenge for deep reinforcement learning agents. Actor-critic algorithms add further complexity to this challenge, as it is often unclear whether the same information will be relevant to both the actor and the critic. To this end, we here explore the principles that underlie effective representations for the actor and for the critic in on-policy algorithms. We focus our study on understanding whether the actor and critic will benefit from separate, rather than shared, representations. Our primary finding is that when separated, the representations for the actor and critic systematically specialise in extracting different types of information from the environment -- the actor's representation tends to focus on action-relevant information, while the critic's representation specialises in encoding value and dynamics information. We conduct a rigorous empirical study to understand how different representation learning approaches affect the actor and critic's specialisations and their downstream performance, in terms of sample efficiency and generation capabilities. Finally, we discover that a separated critic plays an important role in exploration and data collection during training. Our code, trained models and data are accessible at https://github.com/francelico/deac-rep.
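The sketch below illustrates the two architectural choices the abstract contrasts: a single encoder shared by the actor and critic heads versus fully separate encoders that can specialise. This is a minimal illustration under assumed dimensions and layer sizes, not the authors' released implementation (which is available at the GitHub link above).

```python
# Illustrative sketch of shared vs. separate actor-critic representations.
# All dimensions, layer sizes, and class names here are assumptions for
# illustration only; see https://github.com/francelico/deac-rep for the
# authors' actual code.
import torch
import torch.nn as nn

def make_encoder(obs_dim: int, hidden: int = 64) -> nn.Module:
    # Small MLP encoder producing a latent representation of the observation.
    return nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU())

class SharedActorCritic(nn.Module):
    # Actor and critic heads read from the same representation.
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = make_encoder(obs_dim, hidden)
        self.actor_head = nn.Linear(hidden, n_actions)  # policy logits
        self.critic_head = nn.Linear(hidden, 1)          # state value

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)
        return self.actor_head(z), self.critic_head(z)

class SeparateActorCritic(nn.Module):
    # Actor and critic each learn their own representation, so each can
    # specialise (action-relevant vs. value/dynamics information).
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.actor_encoder = make_encoder(obs_dim, hidden)
        self.critic_encoder = make_encoder(obs_dim, hidden)
        self.actor_head = nn.Linear(hidden, n_actions)
        self.critic_head = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor):
        logits = self.actor_head(self.actor_encoder(obs))
        value = self.critic_head(self.critic_encoder(obs))
        return logits, value
```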
Optimal Approximate Minimization of One-Letter Weighted Finite Automata
Clara Lacroce
Borja Balle
Conditions on Preference Relations that Guarantee the Existence of Optimal Policies
Polynomial Lawvere Logic
Giorgio Bacci
Radu Mardare
Gordon D. Plotkin
Policy Gradient Methods in the Presence of Symmetries and State Abstractions
Sum and Tensor of Quantitative Effects
Giorgio Bacci
Radu Mardare
Gordon Plotkin
Behavioural pseudometrics for continuous-time diffusions
Linan Chen
Florence Clerc
Propositional Logics for the Lawvere Quantale
Giorgio Bacci
Radu Mardare
Gordon Plotkin
Behavioural equivalences for continuous-time Markov processes
Linan Chen
Florence Clerc
A Kernel Perspective on Behavioural Metrics for Markov Decision Processes
We present a novel perspective on behavioural metrics for Markov decision processes via the use of positive definite kernels. We define a new metric under this lens that is provably equivalent to the recently introduced MICo distance (Castro et al., 2021). The kernel perspective enables us to provide new theoretical results, including value-function bounds and low-distortion finite-dimensional Euclidean embeddings, which are crucial when using behavioural metrics for reinforcement learning representations. We complement our theory with strong empirical results that demonstrate the effectiveness of these methods in practice.
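As a rough illustration of the basic construction the abstract refers to, any positive definite kernel k on states induces a distance via the standard kernel (feature-space) distance. The Gaussian kernel below is a generic stand-in for illustration, not the specific kernel from the paper that recovers the MICo distance.

```python
# Minimal sketch: a positive definite kernel k on states induces a distance
#   d(x, y) = sqrt( k(x, x) - 2 k(x, y) + k(y, y) ).
# The Gaussian kernel here is an assumed placeholder, not the kernel
# constructed in the paper.
import numpy as np

def gaussian_kernel(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def kernel_distance(x: np.ndarray, y: np.ndarray, kernel=gaussian_kernel) -> float:
    # Distance induced by the feature-space embedding of a positive definite kernel.
    sq = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
    return float(np.sqrt(max(sq, 0.0)))  # clamp tiny negatives from round-off

# Example: distance between two state feature vectors.
s1, s2 = np.array([0.0, 1.0]), np.array([0.5, 0.5])
print(kernel_distance(s1, s2))
```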