
Christian Gagné

Associate Academic Member
Canada CIFAR AI Chair
Full Professor, Université Laval, Department of Electrical and Computer Engineering
Director, Institute Intelligence and Data (IID)
Research Topics
Computer Vision
Deep Learning
Learning to Program
Medical Machine Learning
Representation Learning

Biography

Christian Gagné has been a professor in the Department of Electrical and Computer Engineering at Université Laval since 2008.

He is the director of the Institute Intelligence and Data (IID), holds a Canada CIFAR AI Chair, and is an associate member of Mila – Quebec Artificial Intelligence Institute.

Gagné is also a member of Université Laval’s Computer Vision and Systems Laboratory (LVSN), as well as its Robotics, Vision and Machine Intelligence Research Centre (CeRVIM) and its Big Data Research Centre (CRDM). He is a member of the REPARTI and UNIQUE strategic clusters of the FRQNT, the VITAM centre of the FRQS, and the International Observatory on the Societal Impacts of AI and Digital Technologies (OBVIA).

Gagné’s research focuses on the development of methods for machine learning and stochastic optimization. In particular, he is interested in deep neural networks, representation learning and transfer, meta-learning, and multi-task learning. He is also interested in optimization approaches based on probabilistic models and evolutionary algorithms, including black-box optimization and automatic programming. An important part of his work involves the practical application of these techniques in fields such as computer vision, microscopy, healthcare, energy and transportation.

Current Students

PhD - Université Laval
PhD - Université Laval
Master's Research - Université Laval
PhD - Université Laval
PhD - Université Laval
PhD - Université Laval
PhD - Université Laval

Publications

A Layer Selection Approach to Test Time Adaptation
Sabyasachi Sahoo
Mostafa ElAraby
Jonas Ngnawe
Yann Batiste Pequignot
Frederic Precioso
Test Time Adaptation (TTA) addresses the problem of distribution shift by adapting a pretrained model to a new domain during inference. When faced with challenging shifts, most methods collapse and perform worse than the original pretrained model. In this paper, we find that not all layers are equally receptive to adaptation, and the layers with the most misaligned gradients often cause performance degradation. To address this, we propose GALA, a novel layer selection criterion to identify the most beneficial updates to perform during test time adaptation. This criterion can also filter out unreliable samples with noisy gradients. Its simplicity allows seamless integration with existing TTA loss functions, thereby preventing degradation and focusing adaptation on the most trainable layers. This approach also helps to regularize adaptation to preserve the pretrained features, which are crucial for handling unseen domains. Through extensive experiments, we demonstrate that the proposed layer selection framework improves the performance of existing TTA approaches across multiple datasets, domain shifts, model architectures, and TTA losses.
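The abstract does not spell out GALA's exact criterion, so the sketch below is only a hypothetical illustration of gradient-based layer selection during test-time adaptation: each parameter tensor is scored by the cosine similarity between its current gradient and a running average of its past gradients, and only the top-scoring tensors are updated. The entropy objective, the momentum constant and the top_k cutoff are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    # A common TTA objective: entropy of the model's predictions on a test batch.
    probs = logits.softmax(dim=1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

@torch.no_grad()
def alignment_scores(model, grad_history, momentum=0.9):
    # Score each parameter tensor by the cosine similarity between its current
    # gradient and a running average of its past gradients (a hypothetical
    # stand-in for a gradient-alignment criterion, not the paper's GALA score).
    scores = {}
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        g = param.grad.flatten()
        hist = grad_history.get(name, torch.zeros_like(g))
        scores[name] = F.cosine_similarity(g, hist, dim=0).item()
        grad_history[name] = momentum * hist + (1 - momentum) * g
    return scores

def adapt_step(model, batch, optimizer, grad_history, top_k=10):
    # One adaptation step that only updates the best-aligned parameter tensors.
    optimizer.zero_grad()
    loss = entropy_loss(model(batch))
    loss.backward()
    scores = alignment_scores(model, grad_history)
    keep = set(sorted(scores, key=scores.get, reverse=True)[:top_k])
    for name, param in model.named_parameters():
        if name not in keep and param.grad is not None:
            param.grad = None  # freeze poorly aligned tensors for this step
    optimizer.step()
    return loss.item()
```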
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
Jonas Ngnawe
Sabyasachi Sahoo
Yann Batiste Pequignot
Frederic Precioso
Quantitative Analysis of Miniature Synaptic Calcium Transients Using Positive Unlabeled Deep Learning
Frédéric Beaupré
Anthony Bilodeau
Theresa Wiesner
Gabriel Leclerc
Mado Lemieux
Gabriel Nadeau
Katrine Castonguay
Bolin Fan
Simon Labrecque
Renée Hložek
Paul De Koninck
Flavie Lavoie-Cardinal
Ca2+ imaging methods are widely used for studying cellular activity in the brain, allowing detailed analysis of dynamic processes across various scales. Enhanced by high-contrast optical microscopy and fluorescent Ca2+ sensors, this technique can be used to reveal localized Ca2+ fluctuations within neurons, including in sub-cellular compartments such as the dendritic shaft or spines. Despite advances in Ca2+ sensors, the analysis of miniature Synaptic Calcium Transients (mSCTs), characterized by variability in morphology and low signal-to-noise ratios, remains challenging. Traditional threshold-based methods struggle with the detection and segmentation of these small, dynamic events. Deep learning (DL) approaches offer promising solutions but are limited by the need for large annotated datasets. Positive Unlabeled (PU) learning addresses this limitation by leveraging unlabeled instances to increase dataset size and enhance performance. This approach is particularly useful for mSCTs, which are scarce, small, and account for only a very small proportion of foreground pixels. PU learning significantly increases the effective size of the training dataset, improving model performance. Here, we present a PU learning-based strategy for detecting and segmenting mSCTs. We evaluate the performance of two 3D deep learning models, StarDist-3D and 3D U-Net, which are well established for the segmentation of small volumetric structures in microscopy datasets. By integrating PU learning, we enhance the 3D U-Net’s performance, demonstrating significant gains over traditional methods. This work pioneers the application of PU learning in Ca2+ imaging analysis, offering a robust framework for mSCT detection and segmentation. We also demonstrate how this quantitative analysis pipeline can be used for subsequent mSCT feature analysis. We characterize morphological and kinetic changes of mSCTs associated with the application of chemical long-term potentiation (cLTP) stimulation in cultured rat hippocampal neurons. Our data-driven approach shows that a cLTP-inducing stimulus leads to the emergence of new active dendritic regions and differentially affects mSCT subtypes.
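The paper's exact PU formulation is not given in the abstract; as a rough sketch of how positive-unlabeled learning can be applied pixel-wise to segmentation, the snippet below adapts a generic non-negative PU risk estimator (in the spirit of Kiryo et al., 2017) to logits from a 3D U-Net-style model. The class prior and the masks are illustrative assumptions, not the study's settings.

```python
import torch

def nnpu_segmentation_loss(logits, positive_mask, unlabeled_mask, prior=0.01):
    # Pixel-wise non-negative PU risk (in the spirit of Kiryo et al., 2017).
    # `positive_mask` marks annotated event pixels, `unlabeled_mask` marks
    # pixels without annotations (mostly background, possibly containing
    # unannotated events); `prior` is the assumed fraction of positives
    # among unlabeled pixels -- all illustrative assumptions.
    probs = torch.sigmoid(logits)
    loss_pos = -probs.clamp_min(1e-8).log()          # loss if a pixel is labeled positive
    loss_neg = -(1 - probs).clamp_min(1e-8).log()    # loss if a pixel is labeled negative

    n_pos = positive_mask.sum().clamp_min(1)
    n_unl = unlabeled_mask.sum().clamp_min(1)
    risk_pos = (loss_pos * positive_mask).sum() / n_pos
    risk_unl_as_neg = (loss_neg * unlabeled_mask).sum() / n_unl
    risk_pos_as_neg = (loss_neg * positive_mask).sum() / n_pos

    # Estimated negative risk, clamped at zero to keep the estimator non-negative.
    negative_risk = risk_unl_as_neg - prior * risk_pos_as_neg
    return prior * risk_pos + torch.clamp(negative_risk, min=0.0)
```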
TrackPGD: A White-box Attack using Binary Masks against Robust Transformer Trackers
Fatemeh Nourilenjan Nokabadi
Yann Batiste Pequignot
Jean-Francois Lalonde
Predicting the Population Risk of Suicide Using Routinely Collected Health Administrative Data in Quebec, Canada: Model-Based Synthetic Estimation Study
JianLi Wang
Fatemeh Gholi Zadeh Kharrat
Geneviève Gariépy
Jean-François Pelletier
Victoria Massamba
Pascale Lévesque
Mada Mohammed
Alain Lesage
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
Jonas Ngnawe
Sabyasachi Sahoo
Yann Batiste Pequignot
Frédéric Precioso
Despite extensive research on adversarial training strategies to improve robustness, the decisions of even the most robust deep learning models can still be quite sensitive to imperceptible perturbations, creating serious risks when deploying them for high-stakes real-world applications. While detecting such cases may be critical, evaluating a model's vulnerability at a per-instance level using adversarial attacks is computationally too intensive and unsuitable for real-time deployment scenarios. The input space margin is the exact score for detecting non-robust samples, but it is intractable to compute for deep neural networks. This paper introduces the concept of margin consistency -- a property that links the input space margins and the logit margins in robust models -- for efficient detection of vulnerable samples. First, we establish that margin consistency is a necessary and sufficient condition to use a model's logit margin as a score for identifying non-robust samples. Next, through comprehensive empirical analysis of various robustly trained models on the CIFAR10 and CIFAR100 datasets, we show that they exhibit strong margin consistency, with a strong correlation between their input space margins and logit margins. Then, we show that we can effectively use the logit margin to confidently detect brittle decisions with such models and accurately estimate robust accuracy on an arbitrarily large test set by estimating the input margins only on a small subset. Finally, we address cases where the model is not sufficiently margin-consistent by learning a pseudo-margin from the feature representation. Our findings highlight the potential of leveraging deep representations to efficiently assess adversarial vulnerability in deployment scenarios.
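As a minimal sketch of the idea described above (assuming a standard PyTorch classifier), the logit margin below serves as a per-sample vulnerability score; the detection threshold is an assumption and would in practice be calibrated against input-space margins estimated on a small subset, as the abstract suggests.

```python
import torch

@torch.no_grad()
def logit_margin(model, x):
    # Difference between the top logit and the runner-up logit for each input.
    # Under margin consistency, a small logit margin signals a decision that
    # is likely to flip under a small input perturbation.
    logits = model(x)
    top2 = logits.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]

def flag_brittle(model, x, threshold):
    # Inputs below `threshold` are flagged as potentially non-robust.  The
    # threshold is hypothetical here; it would be calibrated against
    # input-space margins estimated (e.g. with an attack) on a held-out subset.
    return logit_margin(model, x) < threshold
```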
Reproducibility Study on Adversarial Attacks Against Robust Transformer Trackers
Fatemeh Nourilenjan Nokabadi
Jean-Francois Lalonde
Layerwise Early Stopping for Test Time Adaptation
Sabyasachi Sahoo
Mostafa ElAraby
Jonas Ngnawe
Yann Batiste Pequignot
Frederic Precioso
Explainable artificial intelligence models for predicting risk of suicide using health administrative data in Quebec
Fatemeh Gholi Zadeh Kharrat
Alain Lesage
Geneviève Gariépy
Jean-François Pelletier
Camille Brousseau-Paradis
Louis Rochette
Eric Pelletier
Pascale Lévesque
Mada Mohammed
JianLi Wang
Suicide is a complex, multidimensional event, and a significant challenge for prevention globally. Artificial intelligence (AI) and machine learning (ML) have emerged as ways to harness large-scale datasets to enhance risk detection. In order to trust and act upon the predictions made with ML, more intuitive user interfaces must be validated. Thus, interpretable AI is one of the crucial directions that could allow policy and decision makers to make reasonable and data-driven decisions that can ultimately lead to better mental health services planning and suicide prevention. This research aimed to develop sex-specific ML models for predicting the population risk of suicide and to interpret the models. Data were from the Quebec Integrated Chronic Disease Surveillance System (QICDSS), covering up to 98% of the population in the province of Quebec and containing data for over 20,000 suicides between 2002 and 2019. We employed a case-control study design. Individuals were considered cases if they were aged 15+ and had died from suicide between January 1st, 2002, and December 31st, 2019 (n = 18,339). Controls were a random sample of 1% of the Quebec population aged 15+ of each year, who were alive on December 31st of each year, from 2002 to 2019 (n = 1,307,370). We included 103 features, including individual, programmatic, systemic, and community factors, measured up to five years prior to the suicide events. We trained and then validated the sex-specific predictive risk models using supervised ML algorithms, including Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost) and Multilayer Perceptron (MLP). We computed operating characteristics, including sensitivity, specificity, and Positive Predictive Value (PPV). We then generated receiver operating characteristic (ROC) curves to predict suicides, as well as calibration measures. For interpretability, Shapley Additive Explanations (SHAP) were used with the global explanation to determine how much the input features contribute to the models' output and the largest absolute coefficients. The best sensitivity was 0.38 with logistic regression for males and 0.47 with MLP for females; the XGBoost classifier had the best precision (PPV), with 0.25 for males and 0.19 for females. This study demonstrated the potential of explainable AI models as tools for decision-making and population-level suicide prevention actions. The ML models included individual, programmatic, systemic, and community-level variables that are routinely available to decision makers and planners in a publicly managed care system. Caution should be exercised when interpreting the variables associated in a predictive model, since these associations are not causal, and other study designs are required to establish the value of individual treatments. The next steps are to produce an intuitive user interface for decision makers, planners and other stakeholders such as clinicians or representatives of families and people with lived experience of suicidal behaviors or death by suicide, showing, for example, how variations in the quality of local primary care programs for depression or substance use disorders, or increases in regional mental health and addiction budgets, would lower suicide rates.
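The QICDSS data are not public, so the sketch below only illustrates the general shape of such a pipeline on synthetic placeholder data: an XGBoost classifier trained on a rare binary outcome, followed by a SHAP global explanation. The feature count, hyperparameters, class prevalence and evaluation metric are assumptions, not the study's settings.

```python
import numpy as np
import shap
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic placeholder data: 103 features (as in the study) and a rare binary
# outcome.  A real analysis would use the QICDSS features, which are not public.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 103))
y = rng.binomial(1, 0.014, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Class weighting to compensate for the heavy imbalance of the outcome.
pos_weight = (y_train == 0).sum() / max((y_train == 1).sum(), 1)
model = XGBClassifier(n_estimators=300, max_depth=4, scale_pos_weight=pos_weight)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Global explanation: rank features by mean absolute SHAP value.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = np.abs(shap_values).mean(axis=0)
print("Most influential feature indices:", np.argsort(importance)[::-1][:10])
```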
Generalizing across Temporal Domains with Koopman Operators
Qiuhao Zeng
Wei Wang
Fan Zhou
Gezheng Xu
Ruizhi Pu
Changjian Shui
Shichun Yang
Boyu Wang
Charles Ling
Analyzing Data Augmentation for Medical Images: A Case Study in Ultrasound Images
Adam Tupper
Data augmentation is one of the most effective techniques to improve the generalization performance of deep neural networks. Yet, despite often facing limited data availability in medical image analysis, it is frequently underutilized. This appears to be due to a gap in our collective understanding of the efficacy of different augmentation techniques across medical imaging tasks and modalities. One domain where this is especially true is breast ultrasound images. This work addresses this issue by analyzing the effectiveness of different augmentation techniques for the classification of breast lesions in ultrasound images. We assess the generalizability of our findings across several datasets, demonstrate that certain augmentations are far more effective than others, and show that their usage leads to significant performance gains.
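As a hypothetical example of the kind of augmentation pipeline such a study compares (the specific torchvision transforms and parameters below are illustrative, not the ones evaluated in the paper), a training pipeline for grayscale ultrasound images might look like this:

```python
from torchvision import transforms

# Training-time augmentations: small geometric and intensity perturbations
# that plausibly preserve the appearance of breast lesions in ultrasound.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Evaluation uses only deterministic preprocessing.
eval_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```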