
Joelle Pineau

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, McGill University, School of Computer Science
Co-Managing Director, Meta AI (FAIR - Facebook AI Research)
Research Topics
Medical Machine Learning
Reinforcement Learning
Natural Language Processing

Biography

Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University, where she co-directs the Reasoning and Learning Lab. She is a core faculty member of Mila – Quebec Artificial Intelligence Institute and holds a Canada CIFAR AI Chair. She is also Vice-President of AI Research at Meta (formerly Facebook), where she leads the FAIR (Fundamental AI Research) team. She holds a BSc in engineering from the University of Waterloo, and an MSc and a PhD in robotics from Carnegie Mellon University.

Her research focuses on developing new models and algorithms for planning and learning in complex, partially observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games, and conversational agents. She is a member of the editorial boards of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research, and is currently President of the International Machine Learning Society. She received the Natural Sciences and Engineering Research Council of Canada (NSERC) E. W. R. Steacie Memorial Fellowship in 2018 and the Governor General's Innovation Award in 2019. She is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR), and a Fellow of the Royal Society of Canada.

Current Students

PhD - UdeM
Principal supervisor:
PhD - McGill
Co-supervisor:
PhD - McGill

Publications

On the Societal Impact of Open Foundation Models
Sayash Kapoor
Rishi Bommasani
Kevin Klyman
Shayne Longpre
Ashwin Ramaswami
Peter Cihon
Aspen Hopkins
Kevin Bankston
Stella Biderman
Miranda Bogen
Rumman Chowdhury
Alex Engler
Peter Henderson
Yacine Jernite
Seth Lazar
Stefano Maffulli
Alondra Nelson
Aviya Skowron
Dawn Song
Victor Storchan
Daniel Zhang
Daniel E. Ho
Percy Liang
Arvind Narayanan
Questions Are All You Need to Train a Dense Passage Retriever
Devendra Singh Sachan
Mike Lewis
Dani Yogatama
Luke Zettlemoyer
Manzil Zaheer
We introduce ART, a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a central challenge for open-domain tasks, such as Open QA, where state-of-the-art methods typically require large supervised datasets with custom hard-negative mining and denoising of positive examples. ART, in contrast, only requires access to unpaired inputs and outputs (e.g., questions and potential answer passages). It uses a new passage-retrieval autoencoding scheme, where (1) an input question is used to retrieve a set of evidence passages, and (2) the passages are then used to compute the probability of reconstructing the original question. Training for retrieval based on question reconstruction enables effective unsupervised learning of both passage and question encoders, which can be later incorporated into complete Open QA systems without any further finetuning. Extensive experiments demonstrate that ART obtains state-of-the-art results on multiple QA retrieval benchmarks with only generic initialization from a pre-trained language model, removing the need for labeled data and task-specific losses. Our code and model checkpoints are available at: https://github.com/DevSinghSachan/art.
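To make the training idea concrete, here is a minimal sketch of an ART-style update, assuming a dual-encoder retriever and a frozen pre-trained LM that supplies question-reconstruction log-likelihoods; the function and variable names are illustrative, not from the released code.

```python
# Minimal sketch of ART-style retriever training (illustrative, not the
# authors' released code; see the linked repo for the real implementation).
import torch
import torch.nn.functional as F

def art_step(q_emb, p_embs, recon_log_probs, temperature=1.0):
    """One ART-style training loss.

    q_emb:            (d,) question embedding from the question encoder
    p_embs:           (k, d) embeddings of the k retrieved passages
    recon_log_probs:  (k,) log P(question | passage) from a frozen
                      pre-trained LM, one score per retrieved passage
    Returns a KL loss pushing retrieval scores toward the
    reconstruction-based soft labels.
    """
    retriever_logits = p_embs @ q_emb                          # (k,) dot-product scores
    student = F.log_softmax(retriever_logits / temperature, dim=0)
    teacher = F.softmax(recon_log_probs / temperature, dim=0)  # soft labels
    return F.kl_div(student, teacher.detach(), reduction="sum")

# Toy usage with random tensors standing in for real encoders and LM scores:
d, k = 128, 8
q = torch.randn(d, requires_grad=True)
p = torch.randn(k, d, requires_grad=True)
recon = torch.randn(k)  # stand-in for question-reconstruction log-likelihoods
loss = art_step(q, p, recon)
loss.backward()
```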
Group Fairness in Reinforcement Learning
Harsh Satija
Alessandro Lazaric
Matteo Pirotta
We pose and study the problem of satisfying fairness in the online Reinforcement Learning (RL) setting. We focus on the group notions of fairness, according to which agents belonging to different groups should have similar performance based on some given measure. We consider the setting of maximizing return in an unknown environment (unknown transition and reward function) and show that it is possible to have RL algorithms that learn the best fair policies without violating the fairness requirements at any point in time during the learning process. In the tabular finite-horizon episodic setting, we provide an algorithm that combines the principle of optimism and pessimism under uncertainty to achieve zero fairness violation with arbitrarily high probability while also maintaining sub-linear regret guarantees. For the high-dimensional Deep-RL setting, we present algorithms based on the performance-difference style approximate policy improvement update step and we report encouraging empirical results on various traditional RL-inspired benchmarks showing that our algorithms display the desired behavior of learning the optimal policy while performing a fair learning process.
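The following is a minimal sketch of the optimism/pessimism fairness certificate described above, under the assumption that we maintain per-group confidence intervals on the candidate policy's return; the names and data are illustrative, not the paper's algorithm verbatim.

```python
# Minimal sketch of a group-fairness check based on optimistic and
# pessimistic per-group return estimates (illustrative names, not the
# paper's notation).
def satisfies_group_fairness(ret_lo, ret_hi, eps):
    """ret_lo / ret_hi: dicts mapping group id -> pessimistic / optimistic
    return estimate for the candidate policy. The policy is certified fair
    only if, even in the worst case over the confidence intervals, no
    group's performance can exceed another's by more than eps."""
    groups = list(ret_lo)
    for g in groups:
        for h in groups:
            # worst case: group g as low as possible, group h as high as possible
            if ret_hi[h] - ret_lo[g] > eps:
                return False
    return True

# Example: two groups with overlapping intervals and a tolerance of 0.5.
print(satisfies_group_fairness({"A": 1.0, "B": 1.2},
                               {"A": 1.3, "B": 1.4}, eps=0.5))  # True
```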
Estimating causal effects with optimization-based methods: A review and empirical comparison
Martin Cousineau
Vedat Verter
Susan A. Murphy
Publisher Correction: Advancing ethics review practices in AI research
Madhulika Srikumar
Rebecca Finlay
Grace M. Abuhamad
Carolyn Ashurst
Rosie Campbell
Emily Campbell-Ratcliffe
Hudson Hongo
Sara Rene Jordan
Joseph Lindley
Aviv Ovadya
Low-Rank Representation of Reinforcement Learning Policies
We propose a general framework for policy representation for reinforcement learning tasks. This framework involves finding a low-dimensional embedding of the policy on a reproducing kernel Hilbert space (RKHS). The usage of RKHS based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are very desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
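As a rough illustration of the idea (with truncated SVD standing in for the paper's RKHS machinery), the sketch below embeds a random tabular policy in a low-dimensional space and measures how well the policy is reconstructed; the rank and the policy itself are toy assumptions.

```python
# Minimal sketch: low-rank representation of a tabular policy via
# truncated SVD (a simple stand-in for the paper's RKHS embedding).
import numpy as np

rng = np.random.default_rng(0)
S, A, r = 200, 10, 5                     # states, actions, embedding rank

logits = rng.normal(size=(S, A))
policy = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-stochastic (S, A)

U, s, Vt = np.linalg.svd(policy, full_matrices=False)
embedding = U[:, :r] * s[:r]             # low-dimensional per-state representation
reconstructed = embedding @ Vt[:r]       # approximate policy recovered from it

err = np.abs(policy - reconstructed).max()
print(f"max reconstruction error at rank {r}: {err:.4f}")
```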
SPeCiaL: Self-Supervised Pretraining for Continual Learning
Lucas Caccia
A Generalized Bootstrap Target for Value-Learning, Efficiently Combining Value and Feature Predictions
Biomedical Research and Informatics Living Laboratory for Innovative Advances of New Technologies in Community Mobility Rehabilitation: Protocol for Evaluation and Rehabilitation of Mobility Across Continuums of Care
Sara Ahmed
Philippe Archambault
Claudine Auger
Joyce Fung
Eva Kehayia
Anouk Lamontagne
Annette Majnemer
Sylvie Nadeau
Alain Ptito
Bonnie Swaine
Background Rapid advances in technologies over the past 10 years have enabled large-scale biomedical and psychosocial rehabilitation research to improve the function and social integration of persons with physical impairments across the lifespan. The Biomedical Research and Informatics Living Laboratory for Innovative Advances of New Technologies (BRILLIANT) in community mobility rehabilitation aims to generate evidence-based research to improve rehabilitation for individuals with acquired brain injury (ABI). Objective This study aims to (1) identify the factors limiting or enhancing mobility in real-world community environments (public spaces, including the mall, home, and outdoors) and understand their complex interplay in individuals of all ages with ABI and (2) customize community environment mobility training by identifying, on a continuous basis, the specific rehabilitation strategies and interventions that patient subgroups benefit from most. Here, we present the research and technology plan for the BRILLIANT initiative. Methods A cohort of individuals, adults and children, with ABI (N=1500) will be recruited. Patients will be recruited from the acute care and rehabilitation partner centers within 4 health regions (living labs) and followed throughout the continuum of rehabilitation. Participants will also be recruited from the community. Biomedical, clinician-reported, patient-reported, and brain imaging data will be collected. Theme 1 will implement and evaluate the feasibility of collecting data across BRILLIANT living labs and conduct predictive analyses and artificial intelligence (AI) to identify mobility subgroups. Theme 2 will implement, evaluate, and identify community mobility interventions that optimize outcomes for mobility subgroups of patients with ABI. Results The biomedical infrastructure and equipment have been established across the living labs, and development of the clinician- and patient-reported outcome digital solutions is underway. Recruitment is expected to begin in May 2022. Conclusions The program will develop and deploy a comprehensive clinical and community-based mobility-monitoring system to evaluate the factors that result in poor mobility, and develop personalized mobility interventions that are optimized for specific patient subgroups. Technology solutions will be designed to support clinicians and patients to deliver cost-effective care and the right intervention to the right person at the right time to optimize long-term functional potential and meaningful participation in the community. International Registered Report Identifier (IRRID) PRR1-10.2196/12506
Block Contextual MDPs for Continual Learning
Shagun Sodhani
Franziska Meier
Amy Zhang
In reinforcement learning (RL), when defining a Markov Decision Process (MDP), the environment dynamics is implicitly assumed to be stationary. This assumption of stationarity, while simplifying, can be unrealistic in many scenarios. In the continual reinforcement learning scenario, the sequence of tasks is another source of nonstationarity. In this work, we propose to examine this continual reinforcement learning setting through the Block Contextual MDP (BC-MDP) framework, which enables us to relax the assumption of stationarity. This framework challenges RL algorithms to handle both nonstationarity and rich observation settings and, by additionally leveraging smoothness properties, enables us to study generalization bounds for this setting. Finally, we take inspiration from adaptive control to propose a novel algorithm that addresses the challenges introduced by this more realistic BC-MDP setting, allows for zero-shot adaptation at evaluation time, and achieves strong performance on several nonstationary environments.
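A minimal toy sketch of the setting is given below, assuming a 1-D environment whose latent context drifts over time while the agent only sees rich observations; the drift rule and dynamics are illustrative assumptions, not the paper's benchmarks.

```python
# Toy Block Contextual MDP-style environment: a hidden context drifts and
# changes the dynamics, and the agent receives rich observations that do
# not expose the context directly (illustrative, not the paper's setup).
import numpy as np

class NonstationaryEnv:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.context = 1.0          # latent context: hidden from the agent
        self.state = 0.0

    def step(self, action):
        self.context += 0.01 * self.rng.normal()    # slow context drift
        self.state += self.context * action + 0.1 * self.rng.normal()
        reward = -abs(self.state)                   # reward for staying near 0
        obs = np.array([self.state, self.state**2]) # rich observation, no context
        return obs, reward

env = NonstationaryEnv()
for t in range(3):
    obs, reward = env.step(action=1.0)
    print(t, obs.round(3), round(reward, 3))
```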
Improving Passage Retrieval with Zero-Shot Question Generation
Devendra Singh Sachan
Mike Lewis
Mandar S. Joshi
Armen Aghajanyan
Wen-tau Yih
Luke Zettlemoyer
We propose a simple and effective re-ranking method for improving passage retrieval in open question answering. The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage. This approach can be applied on top of any retrieval method (e.g. neural or keyword-based), does not require any domain- or task-specific training (and therefore is expected to generalize better to data distribution shifts), and provides rich cross-attention between query and passage (i.e. it must explain every token in the question). When evaluated on a number of open-domain retrieval datasets, our re-ranker improves strong unsupervised retrieval models by 6%-18% absolute and strong supervised models by up to 12% in terms of top-20 passage retrieval accuracy. We also obtain new state-of-the-art results on full open-domain question answering by simply adding the new re-ranker to existing models with no further changes.
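Since the method only needs a pre-trained language model's conditional likelihood, a minimal sketch is easy to write; the model choice (t5-small) and prompt wording below are assumptions, not necessarily the paper's exact setup.

```python
# Minimal sketch of the re-ranking idea: score each retrieved passage by
# the log-likelihood of the question under a pre-trained seq2seq LM.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

def rerank(question, passages):
    scores = []
    for p in passages:
        prompt = f"Passage: {p} Please write a question based on this passage."
        inputs = tok(prompt, return_tensors="pt", truncation=True)
        labels = tok(question, return_tensors="pt").input_ids
        with torch.no_grad():
            # out.loss is the mean negative log-likelihood of the question
            # tokens conditioned on the passage prompt
            out = model(**inputs, labels=labels)
        scores.append(-out.loss.item())
    return sorted(zip(scores, passages), reverse=True)

passages = ["The capital of France is Paris.", "Bananas are rich in potassium."]
for score, p in rerank("What is the capital of France?", passages):
    print(round(score, 3), p)
```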