
Chris Pal

Core Academic Member
Canada CIFAR AI Chair
Full Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Deep Learning

Biography

Christopher Pal holds a Canada CIFAR AI Chair and is a full professor at Polytechnique Montréal and an adjunct professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal. He is also a Distinguished Scientist at ServiceNow Research. He has been involved in artificial intelligence and machine learning research for more than twenty-five years, publishing frequently on large-scale language modelling methods and generative modelling techniques. He holds a PhD in computer science from the University of Waterloo.

Current Students

Research Intern - McGill
Postdoctorate - HEC
Principal supervisor:
Research Collaborator - McGill
Principal supervisor:
Master's Research - UdeM
PhD - Polytechnique
PhD - McGill
Principal supervisor:
PhD - UdeM
Principal supervisor:
PhD - Polytechnique
Master's Research - UdeM
Co-supervisor:
Collaborating Alumni - Polytechnique
PhD - Polytechnique
Postdoctorate - McGill
Co-supervisor:
Master's Research - Polytechnique
PhD - UdeM
Co-supervisor:
Master's Research - Concordia
Co-supervisor:
Research Collaborator - UdeM
Master's Research - UdeM
PhD - UdeM
PhD - Polytechnique
PhD - Polytechnique
PhD - École de technologie supérieure
PhD - UdeM
Principal supervisor:
Postdoctorate - HEC
Principal supervisor:
PhD - Polytechnique
Principal supervisor:
PhD - McGill
Principal supervisor:
PhD - Polytechnique

Publications

Implicit Offline Reinforcement Learning via Supervised Learning
Alexandre Piché
Rafael Pardinas
David Vazquez
Igor Mordatch
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset of varied behaviors. It is as simple as supervised learning and Behavior Cloning (BC) but takes advantage of the return information. On BC tasks, implicit models have been shown to match or outperform explicit ones. Despite the benefits of using implicit models to learn robotic skills via BC, Offline RL via Supervised Learning algorithms have been limited to explicit models. We show how implicit models leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets. Furthermore, we show how closely related our implicit methods are to other popular RL via Supervised Learning algorithms.
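For illustration, here is a minimal, hypothetical sketch of the implicit approach described above: instead of regressing actions directly (explicit behavior cloning), an energy model over (state, action, return) triples is trained with an InfoNCE-style contrastive loss and queried at test time by conditioning on a high desired return. All names, the architecture, and the loss are assumptions of this sketch, not the paper's exact method.

```python
# Hypothetical sketch: return-conditioned *implicit* policy via an energy model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnergyModel(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, ret):
        # Lower energy = more plausible (state, action, return) triple.
        return self.net(torch.cat([state, action, ret], dim=-1)).squeeze(-1)

def info_nce_loss(model, state, action, ret, num_negatives=64):
    # Random actions serve as negatives for the same (state, return) context.
    neg = torch.rand(state.size(0), num_negatives, action.size(-1)) * 2 - 1
    e_pos = model(state, action, ret)                                    # (B,)
    e_neg = model(state.unsqueeze(1).expand(-1, num_negatives, -1), neg,
                  ret.unsqueeze(1).expand(-1, num_negatives, -1))        # (B, N)
    logits = torch.cat([-e_pos.unsqueeze(1), -e_neg], dim=1)
    return F.cross_entropy(logits, torch.zeros(state.size(0), dtype=torch.long))

def act(model, state, target_return, action_dim, num_samples=1024):
    # Condition on a high desired return; pick the lowest-energy candidate action.
    cand = torch.rand(num_samples, action_dim) * 2 - 1
    s = state.unsqueeze(0).expand(num_samples, -1)
    r = torch.full((num_samples, 1), float(target_return))
    with torch.no_grad():
        return cand[model(s, cand, r).argmin()]
```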
Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models
Vikram Voleti
Generative models based on denoising diffusion techniques have led to an unprecedented increase in the quality and diversity of imagery that is now possible to create with neural generative models. However, most contemporary state-of-the-art methods are derived from a standard isotropic Gaussian formulation. In this work we examine the situation where non-isotropic Gaussian distributions are used. We present the key mathematical derivations for creating denoising diffusion models using an underlying non-isotropic Gaussian noise model. We also provide initial experiments with the CIFAR10 dataset to help verify empirically that this more general modelling approach can also yield high-quality samples.
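For intuition, the generalization mentioned in the abstract can be sketched in standard DDPM notation, with a full covariance Sigma = L L^T replacing the identity; this particular parameterization is an assumption of the sketch, not necessarily the paper's exact formulation.

```latex
% Forward process with non-isotropic covariance \Sigma = L L^\top in place of I:
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\,\Sigma\big),
\qquad
x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,L\,\epsilon,\quad \epsilon \sim \mathcal{N}(0, I).

% The score the network must match then acquires a \Sigma^{-1} factor
% (it reduces to the usual -\epsilon/\sqrt{1-\bar\alpha_t} when \Sigma = I):
\nabla_{x_t} \log q(x_t \mid x_0)
  = -\frac{\Sigma^{-1}\big(x_t - \sqrt{\bar\alpha_t}\,x_0\big)}{1-\bar\alpha_t}
  = -\frac{\Sigma^{-1} L\,\epsilon}{\sqrt{1-\bar\alpha_t}}.
```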
Improving Meta-Learning Generalization with Activation-Based Early-Stopping
Simon Guiroy
Goncalo Mordido
Receptive Field Refinement for Convolutional Neural Networks Reliably Improves Predictive Performance
Mats Leon Richter
Minimal changes to neural architectures (e.g. changing a single hyperparameter in a key layer) can lead to significant gains in predictive performance in Convolutional Neural Networks (CNNs). In this work, we present a new approach to receptive field analysis that can yield these types of theoretical and empirical performance gains across the twenty well-known CNN architectures examined in our experiments. By further developing and formalizing the analysis of receptive field expansion in convolutional neural networks, we can predict unproductive layers in an automated manner before ever training a model. This allows us to optimize the parameter efficiency of a given architecture at low cost. Our method is computationally simple and can be applied in an automated manner, or even manually with minimal effort, for most common architectures. We demonstrate the effectiveness of this approach by increasing parameter efficiency across past and current top-performing CNN architectures. Specifically, our approach is able to improve ImageNet1K performance across a wide range of well-known, state-of-the-art (SOTA) model classes, including VGG Nets, MobileNetV1, MobileNetV3, NASNet A (mobile), MnasNet, EfficientNet, and ConvNeXt, leading to a new SOTA result for each model class.
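To make the receptive field arithmetic concrete, here is a minimal sketch of the classic layer-by-layer recurrence; the paper's actual refinement criterion is more involved, and the toy network and layer names below are illustrative only.

```python
# Classic receptive-field arithmetic for a sequential CNN: the receptive field
# grows by (kernel - 1) * jump at each layer, and the jump (output stride)
# accumulates multiplicatively. Layers whose receptive field already covers
# the input add no new spatial context and are candidates for refinement.
def receptive_field_report(layers, input_size):
    """layers: list of (name, kernel_size, stride) tuples."""
    rf, jump = 1, 1
    for name, k, s in layers:
        rf += (k - 1) * jump
        jump *= s
        flag = "  <- saturated: candidate for refinement" if rf >= input_size else ""
        print(f"{name:8s} rf={rf:4d} jump={jump:3d}{flag}")

receptive_field_report(
    [("conv1", 3, 1), ("conv2", 3, 2), ("conv3", 3, 2), ("conv4", 3, 2)],
    input_size=16,
)
```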
SMPL-IK: Learned Morphology-Aware Inverse Kinematics for AI Driven Artistic Workflows
Vikram Voleti
Boris Oreshkin
Florent Bocquelet
Félix Harvey
Louis-Simon Ménard
Does Entity Abstraction Help Generative Transformers Reason?
Nicolas Gontier
We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks requiring different forms of logical reasoning: (1) compositional language understanding with text-based relational reasoning (CLUTRR), (2) abductive reasoning (ProofWriter), (3) multi-hop question answering (HotpotQA), and (4) conversational question answering (CoQA). We propose and empirically explore three ways to add such abstraction: (i) as additional input embeddings, (ii) as a separate sequence to encode, and (iii) as an auxiliary prediction task for the model. Overall, our analysis demonstrates that models with abstract entity knowledge perform better than those without it. The best abstraction-aware models achieved an overall accuracy of 88.8% and 91.8%, compared to the baseline model's 62.9% and 89.8%, on CLUTRR and ProofWriter respectively. However, for HotpotQA and CoQA, we find that F1 scores improve by only 0.5% on average. Our results indicate that the benefit of explicit abstraction is significant in formally defined logical reasoning settings requiring many reasoning hops, but less pronounced for NLP tasks with less formal logical structure.
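As a concrete example of variant (i) above, here is a hypothetical sketch of entity-type abstractions added as extra input embeddings; the entity type ids are assumed to come from an external tagger, and all names and dimensions are illustrative.

```python
# Sketch of abstraction variant (i): entity-type embeddings summed with token
# embeddings, analogous to how positional embeddings are added.
import torch
import torch.nn as nn

class TokenPlusEntityEmbedding(nn.Module):
    def __init__(self, vocab_size, num_entity_types, d_model):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.ent = nn.Embedding(num_entity_types, d_model)  # type 0 = "no entity"

    def forward(self, token_ids, entity_type_ids):
        return self.tok(token_ids) + self.ent(entity_type_ids)

emb = TokenPlusEntityEmbedding(vocab_size=32000, num_entity_types=8, d_model=512)
out = emb(torch.tensor([[11, 42, 7]]), torch.tensor([[1, 0, 3]]))  # (1, 3, 512)
```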
The Liver Tumor Segmentation Benchmark (LiTS)
Patrick Bilic
Patrick Christ
Eugene Vorontsov
Hongwei Bran Li
Grzegorz Chlebus
Hao Chen
Qi Dou
Chi-Wing Fu
Xu Han
Gabriel Efrain Humpire Mamani
Pheng Ann Heng
Jürgen Hesser
Samuel Kadoury
Julian Walter Holch
Tomasz Konopczynski
Miao Yue
Chunming Li
X. Li
Jana Lipková
John Lowengrub
Michal Marianne Amitai
Hans Meine
J. Moltz
Marie Piraud
Ivan Ezhov
Xiaojuan Qi
Fernando Navarro
Jin Qi
Florian Kofler
Markus Rempfler
Johannes C. Paetzold
Karsten Roth
Suprosanna Shit
Andrea Schenk
Xiaobin Hu
Anjany Sekuboyina
Ping Zhou
Christian Hülsemeyer
Marcel Beetz
Jan Kirschke
Florian Ettlinger
Felix Gruen
Benedikt Wiestler
Zhiheng Zhang
Georgios Kaissis
Fabian Lohöfer
Rickmer Braren
J. Holch
Michela Antonelli
Felix Hofmann
Woong Bae
Wieland Sommer
Míriam Bellver
Volker Heinemann
Lei Bi
Colin Jacobs
G. Mamani
Bram van Ginneken
Erik B. Dam
Gabriel Chartrand
An Tang
Michal Drozdzal
Bogdan Georgescu
Avi Ben-Cohen
Xavier Giró-i-Nieto
Eyal Klang
M. Amitai
E. Konen
Hayit Greenspan
Johan Moreau
Jan Hendrik Moltz
Alexandre Hostettler
Christian Igel
Luc Soler
Fabian Isensee
Refael Vivanti
Paul Jäger
Adi Szeskin
Fucang Jia
Naama Lev-Cohain
Krishna Chaitanya Kaluva
Jacob Sosna
Mahendra Khened
Leo Joskowicz
Ildoo Kim
Bjoern Menze
Jae-Hun Kim
Zengming Shen
Sungwoong Kim
Simon Kohl
Avinash Kori
Ganapathy Krishnamurthi
Fan Li
Hongchao Li
Junbo Li
Xiaomeng Li
Jun Ma
Klaus Maier-Hein
Kevis-Kokitsi Maninis
Dorit Merhof
Akshay Pai
Mathias Perslev
Jens Petersen
Jordi Pont-Tuset
Oliver Rippel
Ignacio Sarasua
Jordi Torres
Christian Wachinger
Chunliang Wang
Leon Weninger
Jianrong Wu
Daguang Xu
Xiaoping Yang
Simon Chun-Ho Yu
Yading Yuan
Liping Zhang
Jorge Cardoso
Spyridon Bakas
A General-Purpose Neural Architecture for Geospatial Systems
Nasim Rahaman
Martin Weiss
Frederik Träuble
Francesco Locatello
Alexandre Lacoste
Li Erran Li
Bernhard Schölkopf
Using Graph Algorithms to Pretrain Graph Completion Transformers
Jonathan Pilault
Mikhail Galkin
Bahare Fatemi
Perouz Taslakian
David Vazquez
Recent work on Graph Neural Networks has demonstrated that self-supervised pretraining can further enhance performance on downstream graph, link, and node classification tasks. However, the efficacy of pretraining tasks has not been fully investigated for downstream large knowledge graph completion tasks. Using a contextualized knowledge graph embedding approach, we investigate five different pretraining signals, constructed using several graph algorithms and no external data, as well as their combination. We leverage the versatility of our Transformer-based model to explore graph structure generation pretraining tasks (i.e. path and k-hop neighborhood generation), typically inapplicable to most graph embedding methods. We further propose a new path-finding algorithm guided by information gain and find that it is the best-performing pretraining task across three downstream knowledge graph completion datasets. While using our new path-finding algorithm as a pretraining signal provides 2-3% MRR improvements, we show that pretraining on all signals together gives the best knowledge graph completion results. In a multitask setting that combines all pretraining tasks, our method surpasses the latest strong-performing knowledge graph embedding methods on all metrics for FB15K-237, on MRR and Hit@1 for WN18RR, and on MRR and Hit@10 for JF17K (a knowledge hypergraph dataset).
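As a rough illustration of one of the structure-generation signals named above (k-hop neighborhood generation), here is a sketch using plain BFS over (head, relation, tail) triples. The information-gain path-finding algorithm itself is not reproduced here, and all names are illustrative.

```python
# Sketch: extract the k-hop neighborhood of an entity from a knowledge graph,
# which could then be linearized into a token sequence for a Transformer to
# generate as a pretraining task.
from collections import defaultdict, deque

def k_hop_neighborhood(triples, start, k):
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((r, t))
    seen, frontier, out = {start}, deque([(start, 0)]), []
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand past k hops
        for r, t in adj[node]:
            out.append((node, r, t))
            if t not in seen:
                seen.add(t)
                frontier.append((t, depth + 1))
    return out  # all triples reachable within k hops of `start`
```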
Direct Behavior Specification via Constrained Reinforcement Learning
Julien Roy
Roger Girgis
Joshua Romoff
Chris J Pal
The standard formulation of Reinforcement Learning lacks a practical way of specifying admissible and forbidden behaviors. Most often, practitioners go about the task of behavior specification by manually engineering the reward function, a counter-intuitive process that requires several iterations and is prone to reward hacking by the agent. In this work, we argue that constrained RL, which has almost exclusively been used for safe RL, also has the potential to significantly reduce the amount of work spent on reward specification in applied RL projects. To this end, we propose to specify behavioral preferences in the CMDP framework and to use Lagrangian methods to automatically weigh each of these behavioral constraints. Specifically, we investigate how CMDPs can be adapted to solve goal-based tasks while adhering to several constraints simultaneously. We evaluate this framework on a set of continuous control tasks relevant to the application of Reinforcement Learning for NPC design in video games.
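For intuition, the Lagrangian mechanism described above can be sketched as follows: a minimal sketch assuming one multiplier per behavioral constraint and a dual gradient-ascent update, with illustrative names and learning rate.

```python
# Each constraint i has a multiplier lambda_i, increased when the measured
# constraint cost exceeds its threshold and decreased (toward 0) otherwise,
# so constraint penalties are re-weighted against task reward automatically.
def lagrangian_update(lambdas, costs, thresholds, lr=0.01):
    return [max(0.0, lam + lr * (c - d))
            for lam, c, d in zip(lambdas, costs, thresholds)]

def penalized_reward(task_reward, costs, lambdas):
    # The policy maximizes r - sum_i lambda_i * c_i at each step.
    return task_reward - sum(lam * c for lam, c in zip(lambdas, costs))
```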
A Probabilistic Perspective on Reinforcement Learning via Supervised Learning
Alexandre Piché
Rafael Pardinas
David Vazquez
Learning to Guide and to Be Guided in the Architect-Builder Problem
Paul Barde
Tristan Karch
Clément Moulin-Frier
Pierre-Yves Oudeyer
We are interested in interactive agents that learn to coordinate, namely, a …