
Christian Gagné

Associate Academic Member
Canada CIFAR AI Chair
Full Professor, Université Laval, Department of Electrical and Computer Engineering
Director, Institute Intelligence and Data (IID)
Research Topics
Computer Vision
Deep Learning
Learning to Program
Medical Machine Learning
Representation Learning

Biography

Christian Gagné has been a professor in the Department of Electrical and Computer Engineering at Université Laval since 2008.

He is the director of the Institute Intelligence and Data (IID), holds a Canada CIFAR AI Chair, and is an associate member of Mila – Quebec Artificial Intelligence Institute.

Gagné is also a member of Université Laval’s Computer Vision and Systems Laboratory (LVSN), as well as its Robotics, Vision and Machine Intelligence Research Centre (CeRVIM) and its Big Data Research Centre (CRDM). He is a member of the REPARTI and UNIQUE strategic clusters of the FRQNT, the VITAM centre of the FRQS, and the International Observatory on the Societal Impacts of AI and Digital Technologies (OBVIA).

Gagné’s research focuses on the development of methods for machine learning and stochastic optimization. In particular, he is interested in deep neural networks, representation learning and transfer, meta-learning and multi-task learning. He is also interested in optimization approaches based on probabilistic models and evolutionary algorithms, including black-box optimization and automatic programming. An important part of his work concerns the practical application of these techniques in fields such as computer vision, microscopy, healthcare, energy and transportation.

Current Students

PhD - Université Laval
PhD - Université Laval
Master's Research - Université Laval
Master's Research - Université Laval
PhD - Université Laval
PhD - Université Laval
Research Intern - Université Laval
PhD - Université Laval
PhD - Université Laval

Publications

Lifelong Online Learning from Accumulated Knowledge
Changjian Shui
William Wang
Ihsen Hedhli
Chi Man Wong
Feng Wan
Boyu Wang
In this article, we formulate lifelong learning as an online transfer learning procedure over consecutive tasks, where learning a given task depends on the accumulated knowledge. We propose a novel, theoretically principled framework, lifelong online learning, in which the learning process for each task proceeds incrementally. Specifically, our framework is composed of two levels of prediction: the prediction information that comes solely from the current task, and the prediction from the knowledge base built by previous tasks. Moreover, this article tackles several fundamental challenges: an arbitrary or even non-stationary task generation process, an unknown number of instances in each task, and the construction of an efficient accumulated knowledge base. Notably, we provide a provable bound for the proposed algorithm, which offers insight into how the accumulated knowledge improves the predictions. Finally, empirical evaluations on both synthetic and real datasets validate the effectiveness of the proposed algorithm.
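The two-level prediction at the heart of this framework can be sketched as follows. The linear predictors, the mixing weight `alpha`, and the function names are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def two_level_predict(x, w_task, knowledge_base, alpha):
    """Two-level prediction: blend the current-task predictor (level 1)
    with the average prediction of the accumulated knowledge base of
    past-task predictors (level 2). Linear models are an assumption here."""
    p_task = float(w_task @ x)                               # level 1: current task only
    p_kb = float(np.mean([w @ x for w in knowledge_base]))   # level 2: previous tasks
    return alpha * p_task + (1.0 - alpha) * p_kb

def update_knowledge_base(knowledge_base, w_task):
    """Once a task ends, append its learned predictor to the knowledge
    base so that later tasks can transfer from it."""
    return knowledge_base + [w_task.copy()]
```

With `alpha` close to 1 the learner trusts the current task; close to 0 it leans on accumulated knowledge, which is what makes early rounds of a new task benefit from past tasks.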
The 5-year longitudinal diagnostic profile and health services utilization of patients treated with electroconvulsive therapy in Quebec: a population-based study
Simon Lafrenière
Fatemeh Gholi-Zadeh-Kharrat
Caroline Sirois
Victoria Massamba
Louis Rochette
Camille Brousseau-Paradis
Simon Patry
Morgane Lemasson
Geneviève Gariépy
Chantal Mérette
Elham Rahme
Alain Lesage
A novel domain adaptation theory with Jensen-Shannon divergence
Changjian Shui
Qi CHEN
Jun Wen
Fan Zhou
Boyu Wang
Fair Representation Learning through Implicit Path Alignment
Changjian Shui
Qi CHEN
Jiaqi Li
Boyu Wang
Matching Feature Sets for Few-Shot Image Classification
Arman Afrasiyabi
Jean‐François Lalonde
In image classification, it is common practice to train deep networks to extract a single feature vector per input image. Few-shot classification methods also mostly follow this trend. In this work, we depart from this established direction and instead propose to extract sets of feature vectors for each image. We argue that a set-based representation intrinsically builds a richer representation of images from the base classes, which can subsequently better transfer to the few-shot classes. To do so, we propose to adapt existing feature extractors to instead produce sets of feature vectors from images. Our approach, dubbed SetFeat, embeds shallow self-attention mechanisms inside existing encoder architectures. The attention modules are lightweight, and as such our method results in encoders that have approximately the same number of parameters as their original versions. During training and inference, a set-to-set matching metric is used to perform image classification. The effectiveness of our proposed architecture and metrics is demonstrated via thorough experiments on standard few-shot datasets, namely miniImageNet, tieredImageNet, and CUB, in both the 1- and 5-shot scenarios. In all cases but one, our method outperforms the state of the art.
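The set-to-set matching step can be illustrated with a Chamfer-style similarity, where each query feature is matched to its best support feature under cosine similarity. The exact metric used by SetFeat may differ; this is a hedged sketch of the general idea.

```python
import numpy as np

def set_match_score(query_set, support_set):
    """Chamfer-style set-to-set similarity: normalize both feature sets,
    match each query feature to its most similar support feature, and
    average the best-match cosine similarities."""
    q = query_set / np.linalg.norm(query_set, axis=1, keepdims=True)
    s = support_set / np.linalg.norm(support_set, axis=1, keepdims=True)
    sim = q @ s.T                    # pairwise cosine similarities
    return sim.max(axis=1).mean()    # best support match per query feature

def classify(query_set, class_support_sets):
    """Assign the query image's feature set to the class whose support
    feature set it matches best."""
    scores = [set_match_score(query_set, s) for s in class_support_sets]
    return int(np.argmax(scores))
```

Each image is thus represented by a matrix of feature vectors rather than a single vector, and classification reduces to comparing matrices with a matching score.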
Evolving Domain Generalization
Wei Wang
Gezheng Xu
Ruizhi Pu
Jiaqi Li
Fan Zhou
Changjian Shui
Charles Ling
Boyu Wang
Deep learning of chest X-rays can predict mechanical ventilation outcome in ICU-admitted COVID-19 patients
Daniel Gourdeau
Olivier Potvin
Jason Henry Biem
Lyna Abrougui
Patrick Archambault
Carl Chartrand-Lefebvre
Louis Dieumegarde
Louis Gagnon
Raphaelle Giguère
Alexandre Hains
Huy Le
Simon Lemieux
Marie-Hélène Lévesque
Simon Nepveu
Lorne Rosenbloom
An Tang
Issac Yang
Nathalie Duchesne
Simon Duchesne
Tracking and Predicting COVID-19 Radiological Trajectory Using Deep Learning on Chest X-Rays: Initial Accuracy Testing
Simon Duchesne
Olivier Potvin
Daniel Gourdeau
Patrick Archambault
Carl Chartrand-Lefebvre
Louis Dieumegarde
Reza Forghani
Alexandre Hains
David Hornstein
Huy Le
Simon Lemieux
Marie-Hélène Lévesque
Diego Martin
Lorne Rosenbloom
An Tang
Fabrizio Vecchio
Issac Yang
Nathalie Duchesne
Tracking and predicting COVID-19 radiological trajectory on chest X-rays using deep learning
Daniel Gourdeau
Olivier Potvin
Patrick Archambault
Carl Chartrand-Lefebvre
Louis Dieumegarde
Reza Forghani
Alexandre Hains
David Hornstein
Huy Le
Simon Lemieux
Marie-Hélène Lévesque
Diego Martin
Lorne Rosenbloom
An Tang
Fabrizio Vecchio
Issac Yang
Nathalie Duchesne
Simon Duchesne
Radiological findings on chest X-ray (CXR) have been shown to be essential for the proper management of COVID-19 patients, as the maximum severity over the course of the disease is closely linked to the outcome. As such, evaluation of future severity from the current CXR would be highly desirable. We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from an open-source dataset (COVID-19 image data collection) and from a multi-institutional local ICU dataset. The data were grouped into pairs of sequential CXRs and categorized into three categories, ‘Worse’, ‘Stable’, or ‘Improved’, on the basis of radiological evolution ascertained from images and reports. Classical machine-learning algorithms were trained on the deep learning extracted features to perform immediate severity evaluation and prediction of future radiological trajectory. Receiver operating characteristic analyses and Mann-Whitney tests were performed. Deep learning predictions between ‘Worse’ and ‘Improved’ outcome categories and for severity stratification were significantly different for three radiological signs and one diagnosis (‘Consolidation’, ‘Lung Lesion’, ‘Pleural effusion’ and ‘Pneumonia’; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between ‘Worse’ and ‘Improved’ cases with a 0.81 (0.74–0.83 95% CI) AUC in the open-access dataset and with a 0.66 (0.67–0.64 95% CI) AUC in the ICU dataset. Features extracted from the CXR could predict disease severity with a 52.3% accuracy in a 4-way classification. Severity evaluation trained on the COVID-19 image data collection had good out-of-distribution generalization when tested on the local dataset, with 81.6% of intubated ICU patients being classified as critically ill, and the predicted severity was correlated with the clinical outcome with a 0.639 AUC. CXR deep learning features show promise for classifying disease severity and trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
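The overall pipeline in this study — frozen deep-learning features feeding a classical classifier — can be sketched on synthetic data. The features below are random stand-ins for the CheXnet-derived ones, and the planted linear labels exist purely to make the sketch runnable; none of this reproduces the study's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for deep features extracted from the first CXR of
# each pair, with labels following a planted linear rule.
n, d = 300, 16
X = rng.normal(size=(n, d))
direction = rng.normal(size=d)
y = (X @ direction > 0).astype(float)   # 1 = 'Worse', 0 = 'Improved'

# Simple logistic regression by gradient descent, standing in for the
# "classical machine-learning algorithms" trained on the deep features.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X[:200] @ w)))   # predicted probabilities
    w += 0.1 * X[:200].T @ (y[:200] - p) / 200  # averaged gradient step

# Evaluate on the held-out pairs.
pred = (X[200:] @ w > 0).astype(float)
accuracy = float((pred == y[200:]).mean())
```

The design choice — reusing features from a network trained on a large labeled corpus and fitting only a small classical model on top — is what lets the approach work with the limited number of COVID-19 CXR pairs available.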