Guy Wolf

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, Université de Montréal, Department of Mathematics and Statistics
Concordia University
CHUM - Montreal University Hospital Center
Research Topics
Medical machine learning
Representation learning
Multimodal learning
Deep learning
Spectral learning
Graph learning
Data mining
Molecular modeling
Information retrieval
Graph neural networks
Dynamical systems
Machine learning theory

Biography

Guy Wolf is an associate professor in the Department of Mathematics and Statistics at Université de Montréal. His research interests lie at the intersection of machine learning, data science and applied mathematics. He is particularly interested in data mining methods that use manifold learning and geometric deep learning, as well as in applications to the exploratory analysis of biomedical data.

His research focuses on exploratory data analysis, with applications in bioinformatics. His approaches are multidisciplinary, combining machine learning, signal processing and applied mathematical tools. In particular, his recent work uses a combination of diffusion geometries and deep learning to find emergent patterns, dynamics and structures in high-dimensional big data (for example, in single-cell genomics and proteomics).
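The diffusion-geometry viewpoint mentioned above starts from a Markov operator built over the data. A minimal NumPy sketch of that construction (the Gaussian kernel and the bandwidth `eps` are illustrative choices, not a prescribed recipe):

```python
import numpy as np

def diffusion_operator(X, eps=1.0):
    """Row-stochastic diffusion operator from a Gaussian affinity kernel.

    X: (n, d) data matrix; eps: kernel bandwidth (assumed hyperparameter).
    Powers of the returned matrix propagate probability mass along the
    intrinsic geometry of the data, the starting point of diffusion maps.
    """
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq_dists / eps)              # symmetric pairwise affinities
    return K / K.sum(axis=1, keepdims=True)  # normalize rows to a Markov chain
```

Eigendecomposition of this operator then yields the diffusion coordinates used to embed and denoise the data.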

Current Students

PhD - UdeM
Research collaborator - Yale University
Co-supervisor:
Alumni research collaborator
Master's (research) - Concordia
Principal supervisor:
PhD - Concordia
Principal supervisor:
PhD - UdeM
PhD - UdeM
Co-supervisor:
Master's (research) - Concordia
Principal supervisor:
PhD - UdeM
Research collaborator
PhD - UdeM
Co-supervisor:
Postdoctoral fellow - Concordia
Principal supervisor:
PhD - UdeM
PhD - Concordia
Principal supervisor:
Master's (research) - UdeM
PhD - UdeM
Principal supervisor:
PhD - UdeM
Master's (research) - UdeM
Postdoctoral fellow - UdeM
Co-supervisor:
Research collaborator - McGill (assistant professor)

Publications

Circuit Discovery Helps To Detect LLM Jailbreaking
Despite extensive safety alignment, large language models (LLMs) remain vulnerable to jailbreak attacks that bypass safeguards to elicit harmful content. While prior work attributes this vulnerability to limitations of safety training, the internal mechanisms by which LLMs process adversarial prompts remain poorly understood. We present a mechanistic analysis of jailbreaking behavior in a large-scale, safety-aligned LLM, focusing on LLaMA-2-7B-chat-hf. Leveraging edge attribution patching and subnetwork probing, we systematically identify the computational circuits responsible for generating affirmative responses to jailbreak prompts. Ablating these circuits during first-token prediction can reduce attack success rates by up to 80%, demonstrating their critical role in safety bypass. Our analysis uncovers key attention heads and MLP pathways that mediate adversarial prompt exploitation, revealing how important tokens propagate through these components to override safety constraints. These findings advance the understanding of adversarial vulnerabilities in aligned LLMs and pave the way for targeted, interpretable defense mechanisms based on mechanistic interpretability.
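Zero-ablation of identified components, as used in the circuit analysis above, can be illustrated in a few lines. This is a generic sketch (the array shapes and the `ablate_idx` selection are hypothetical, not the paper's actual pipeline):

```python
import numpy as np

def ablate_heads(head_outputs, ablate_idx):
    """Zero-ablate selected attention heads before summing their contributions.

    head_outputs: (n_heads, d_model) per-head output vectors at one position.
    ablate_idx: indices of heads to knock out (hypothetical selection).
    Returns the residual-stream contribution with those heads removed.
    """
    out = head_outputs.copy()
    out[list(ablate_idx)] = 0.0  # knock out the chosen heads
    return out.sum(axis=0)
```

Comparing model behavior with and without such ablations is what quantifies a circuit's causal role.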
Less is More: Undertraining Experts Improves Model Upcycling
Modern deep learning is increasingly characterized by the use of open-weight foundation models that can be fine-tuned on specialized datasets. This has led to a proliferation of expert models and adapters, often shared via platforms like HuggingFace and AdapterHub. To leverage these resources, numerous model upcycling methods have emerged, enabling the reuse of fine-tuned models in multi-task systems. A natural pipeline has thus formed to harness the benefits of transfer learning and amortize sunk training costs: models are pre-trained on general data, fine-tuned on specific tasks, and then upcycled into more general-purpose systems. A prevailing assumption is that improvements at one stage of this pipeline propagate downstream, leading to gains at subsequent steps. In this work, we challenge that assumption by examining how expert fine-tuning affects model upcycling. We show that long fine-tuning of experts that optimizes for their individual performance leads to degraded merging performance, both for fully fine-tuned and LoRA-adapted models, and to worse downstream results when LoRA adapters are upcycled into MoE layers. We trace this degradation to the memorization of a small set of difficult examples that dominate late fine-tuning steps and are subsequently forgotten during merging. Finally, we demonstrate that a task-dependent aggressive early stopping strategy can significantly improve upcycling performance.
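Weight-space merging of fine-tuned experts, the upcycling step discussed above, reduces in its simplest form to a convex combination of parameter vectors. A sketch (uniform coefficients are an assumption; the paper studies richer merging schemes and MoE upcycling):

```python
import numpy as np

def merge_experts(expert_params, coeffs=None):
    """Merge experts sharing one architecture by convex combination.

    expert_params: list of flat parameter vectors, one per fine-tuned expert.
    coeffs: merging weights; defaults to a uniform average ("model soup" style).
    """
    W = np.stack(expert_params)
    if coeffs is None:
        coeffs = np.full(len(expert_params), 1.0 / len(expert_params))
    return np.tensordot(np.asarray(coeffs), W, axes=1)
```

The finding above suggests that stopping each expert's fine-tuning early, before memorization of hard examples sets in, yields parameter vectors that merge better under such averaging.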
Test Time Adaptation Using Adaptive Quantile Recalibration
Geometry aware graph attention networks to explain single-cell chromatin state and gene expression
Patrick Hanel
Anna Danese
Maria Colomé-Tatché
High-throughput measurements that profile the transcriptome or the epigenome of single cells are becoming a common way to study cell identity. These data are high-dimensional, sparse and nonlinear. Here we present SEAGALL (Single-cell Explainable Geometry-Aware Graph Attention Learning pipeLine), a hypothesis-free method to extract biologically relevant features from single-cell experiments based on geometry-regularised autoencoders (GRAE) and explainable graph attention networks (GAT). We use a GRAE to embed the data into a latent space preserving the data geometry, and we construct a cell-to-cell graph by computing distances in the GRAE bottleneck. Exploiting the attention mechanism to dynamically learn the relevant edges, we use GATs to classify the cells, and we explain the predictions of the model with XAI methods to unravel the features driving cell identity beyond marker genes. We apply our method to datasets from scRNA-seq, scATAC-seq and scChIP-seq experiments. SEAGALL extracts cell-type-specific and stable signatures which not only differ from those found by classical linear approaches but are also less biased by coverage and high expression.
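The cell-to-cell graph in the pipeline above is built from distances in the latent space. A minimal kNN-graph sketch in NumPy (Euclidean distance and the value of `k` are illustrative assumptions):

```python
import numpy as np

def knn_graph(Z, k):
    """Boolean adjacency of a directed kNN graph over latent embeddings.

    Z: (n_cells, latent_dim) coordinates, e.g. a GRAE bottleneck.
    Each cell is connected to its k nearest neighbors (self excluded).
    """
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]      # k closest cells per row
    A = np.zeros(d.shape, dtype=bool)
    A[np.repeat(np.arange(len(Z)), k), nbrs.ravel()] = True
    return A
```

The GAT then operates on this adjacency, with attention re-weighting (and effectively pruning) the candidate edges.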
Recovering undersampled single-cell transcriptomes with HyperCell
Single-cell transcriptomic technology has now matured, allowing quantification of mRNA transcripts corresponding to tens of thousands of genes within a cell. However, still only a small fraction of these mRNAs is captured and measured by today's single-cell assays. There are likely hundreds of thousands of mRNA copies present within a typical human cell, yet these assays omit a majority of the transcripts that are actually present. This introduces technical noise, especially non-biological variability and excessive sparsity, which frustrates downstream analysis and potentially skews biological conclusions. To overcome these challenges, we here develop HyperCell, a probabilistic deep learning approach that explicitly models this undersampling to produce estimates of each cell's original gene transcript abundances across the whole transcriptome. We demonstrate that our framework offers benefits in various mRNA modeling settings by i) correctly differentiating between spurious sampling-induced and real biological zeros, outperforming existing approaches, ii) estimating the total mRNA content of cells across states, iii) reducing contamination due to background transcripts, and iv) helping to counteract biases that may appear during typical differential gene expression analyses using widespread normalization approaches. Our approach to correcting for the technical noise introduced by the single-cell experimental process brings us closer to studying biology starting from the true transcriptome of cells.
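The undersampling described above is commonly modeled as binomial capture: each true transcript copy is observed independently with some capture probability. A toy sketch of this generative view with a method-of-moments recovery of the mean abundance (the Poisson prior and the 10% capture rate are illustrative assumptions, not HyperCell itself):

```python
import numpy as np

def simulate_capture(true_counts, capture_rate, rng):
    """Observed counts under independent binomial capture of each transcript."""
    return rng.binomial(true_counts, capture_rate)

rng = np.random.default_rng(0)
true_counts = rng.poisson(50, size=5000)            # hypothetical true abundances
observed = simulate_capture(true_counts, 0.1, rng)  # ~90% of copies are missed
est_mean = observed.mean() / 0.1                    # unbiased recovery of the mean
```

Note that this simple correction only recovers population-level moments; inferring each individual cell's true counts is the harder problem the probabilistic model above addresses.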

Graph Neural Networks Meet Probabilistic Graphical Models: A Survey
Principal Curvatures Estimation with Applications to Single Cell Data
Yanlei Zhang
Xingzhi Sun
Charles Xu
Kincaid MacDonald
Dhananjay Bhaskar
Bastian Rieck
Unsupervised Test-Time Adaptation for Hepatic Steatosis Grading Using Ultrasound B-Mode Images
Michael Eickenberg
An Tang
Guy Cloutier
Ultrasound (US) is considered a key modality for the clinical assessment of hepatic steatosis (i.e., fatty liver) due to its noninvasiveness and availability. Deep learning methods have attracted considerable interest in this field, as they are capable of learning patterns in a collection of images and achieve clinically comparable levels of accuracy in steatosis grading. However, variations in patient populations, acquisition protocols, equipment, and operator expertise across clinical sites can introduce domain shifts that reduce model performance when applied outside the original training setting. In response, unsupervised domain adaptation techniques are being investigated to address these shifts, allowing models to generalize more effectively across diverse clinical environments. In this work, we propose a test-time batch normalization (TTN) technique designed to handle domain shift, especially changes in label distribution, by adapting selected features of batch normalization (BatchNorm) layers in a trained convolutional neural network model. This approach operates in an unsupervised manner, allowing robust adaptation to new distributions without access to label data. The method was evaluated on two abdominal US datasets collected at different institutions, assessing its capability in mitigating domain shift for hepatic steatosis classification. The proposed method reduced the mean absolute error in steatosis grading by 37% and improved the area under the receiver operating characteristic curve (AUC) for steatosis detection from 0.78 to 0.97, compared to nonadapted models. These findings demonstrate the potential of the proposed method to address domain shift in US-based hepatic steatosis diagnosis, minimizing risks associated with deploying trained models in various clinical settings.
Data Visualization using Functional Data Analysis
Haozhe Chen
Andres Duque Correa
Kevin R. Moon
Data visualization via dimensionality reduction is an important tool in exploratory data analysis. However, when the data are noisy, many existing methods fail to capture the underlying structure of the data. Furthermore, existing methods that can theoretically eliminate all noise are difficult to implement in high dimensions. Here we propose a new data visualization method called Functional Information Geometry (FIG) for dynamical processes that denoises the data by leveraging time information and mitigates the curse of dimensionality using approaches from functional data analysis. We experimentally demonstrate that FIG outperforms other methods in terms of capturing the true structure, hyperparameter robustness, and computational speed. We then use our method to visualize EEG brain measurements of sleep activity.
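FIG itself combines functional data analysis with information geometry; as a much simpler illustration of the underlying intuition of exploiting time structure to denoise before embedding, here is a moving-average-then-projection sketch (this is not FIG, and the window size is an arbitrary assumption):

```python
import numpy as np

def smooth_then_embed(X, window=5, k=2):
    """Temporal smoothing followed by a linear k-dimensional projection.

    X: (T, d) multivariate time series. A moving average along time
    suppresses high-frequency noise; SVD then gives the top-k embedding.
    """
    kernel = np.ones(window) / window
    Xs = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, X)
    Xc = Xs - Xs.mean(axis=0)                     # center the smoothed series
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # top-k linear coordinates
```

Methods like FIG go further by representing trajectories as functions and measuring distances information-geometrically rather than linearly.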
Towards Graph Foundation Models: A Study on the Generalization of Positional and Structural Encodings
Billy Joe Franks
Moshe Eliasof
Carola-Bibiane Schönlieb
Sophie Fellenz
Marius Kloft
Recent advances in integrating positional and structural encodings (PSEs) into graph neural networks (GNNs) have significantly enhanced their performance across various graph learning tasks. However, the general applicability of these encodings and their potential to serve as foundational representations for graphs remain uncertain. This paper investigates the fine-tuning efficiency, scalability with sample size, and generalization capability of learnable PSEs across diverse graph datasets. Specifically, we evaluate their potential as universal pre-trained models that can be easily adapted to new tasks with minimal fine-tuning and limited data. Furthermore, we assess the expressivity of the learned representations, particularly when used to augment downstream GNNs. We demonstrate through extensive benchmarking and empirical analysis that PSEs generally enhance downstream models. However, some datasets may require specific PSE augmentations to achieve optimal performance. Nevertheless, our findings highlight their significant potential to become integral components of future graph foundation models. We provide new insights into the strengths and limitations of PSEs, contributing to the broader discourse on foundation models in graph learning.
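A standard non-learnable baseline for the PSEs discussed above is the Laplacian-eigenvector positional encoding. A dense NumPy sketch (eigenvector sign ambiguity is ignored here; real pipelines handle it with sign flipping or invariant architectures):

```python
import numpy as np

def laplacian_pe(adj, k):
    """First k nontrivial eigenvectors of the symmetric normalized Laplacian.

    adj: (n, n) symmetric adjacency matrix; returns (n, k) node encodings
    that capture coarse-to-fine positional structure of the graph.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return vecs[:, 1:k + 1]         # skip the trivial leading eigenvector
```

Learnable PSEs of the kind studied above can be seen as replacing or refining such fixed spectral features with pre-trained, transferable ones.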
Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation
An Tang
Guy Cloutier
Michael Eickenberg
Deep neural networks have useful applications in many different tasks; however, their performance can be severely affected by changes in the data distribution. For example, in the biomedical field, their performance can be affected by changes in the data (different machines, populations) between training and test datasets. To ensure robustness and generalization to real-world scenarios, test-time adaptation has recently been studied as an approach to adjust models to a new data distribution during inference. Test-time batch normalization is a simple and popular method that has achieved compelling performance on domain shift benchmarks. It is implemented by recalculating batch normalization statistics on test batches. Prior work has focused on analysis with test data that has the same label distribution as the training data. However, in many practical applications this technique is vulnerable to label distribution shifts, sometimes producing catastrophic failure. This presents a risk in deploying test-time adaptation methods. We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts. Our selection scheme is based on two principles that we empirically motivate: (1) later layers of networks are more sensitive to label shift, and (2) individual features can be sensitive to specific classes. We apply the proposed technique to three classification tasks, including CIFAR10-C, ImageNet-C, and diagnosis of fatty liver, where we explore both covariate and label distribution shifts. We find that our method brings the benefits of TTA while significantly reducing the risk of failure common in other methods, and that it is robust to the choice of hyperparameters.
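The selective scheme above can be sketched as a per-channel mask deciding whether each channel uses test-batch or stored training statistics (how the mask is constructed, e.g. from layer depth or a sensitivity score, is the paper's contribution; the mask here is just an input):

```python
import numpy as np

def channel_selective_bn(x, train_mean, train_var, adapt_mask, eps=1e-5):
    """Adapt only masked channels; keep training statistics elsewhere.

    x: (batch, channels) activations; adapt_mask: boolean per channel,
    True = recompute statistics on the test batch, False = keep training stats.
    """
    mean = np.where(adapt_mask, x.mean(axis=0), train_mean)
    var = np.where(adapt_mask, x.var(axis=0), train_var)
    return (x - mean) / np.sqrt(var + eps)
```

Channels left unadapted retain class-dependent activation statistics, which is what protects the method when the test batch's label distribution is skewed.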
AdaFisher: Adaptive Second Order Optimization via Fisher Information
Damien Martins Gomes
Yanlei Zhang
Mahdi S. Hosseini
First-order optimization methods are currently the mainstream in training deep neural networks (DNNs). Optimizers like Adam incorporate limited curvature information by diagonally preconditioning the stochastic gradient during training. Despite the widespread use of first-order methods, second-order optimization algorithms exhibit superior convergence properties compared to first-order counterparts such as Adam and SGD. However, their practicality in training DNNs is still limited by increased per-iteration computation compared to first-order methods. We present \emph{AdaFisher}--an adaptive second-order optimizer that leverages a \emph{diagonal block-Kronecker} approximation of the Fisher information matrix for adaptive gradient preconditioning. AdaFisher aims to bridge the gap between enhanced \emph{convergence/generalization} capabilities and computational efficiency in a second-order optimization framework for training DNNs. Although second-order optimizers are typically slow, we showcase that AdaFisher can be reliably adopted for image classification and language modeling, and stands out for its stability and robustness in hyperparameter tuning. We demonstrate that AdaFisher \textbf{outperforms the SOTA optimizers} in terms of both accuracy and convergence speed. Code is available from https://github.com/AtlasAnalyticsLab/AdaFisher.
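The core idea, preconditioning gradients with a Fisher-information estimate, can be illustrated with a plain diagonal approximation built from an exponential moving average of squared gradients (AdaFisher's actual diagonal block-Kronecker structure is more elaborate; the learning rate and decay are arbitrary toy values):

```python
import numpy as np

def fisher_step(w, grad, fisher_diag, lr=0.05, beta=0.9, eps=1e-8):
    """One preconditioned update with a diagonal empirical Fisher estimate.

    fisher_diag: running EMA of squared gradients, used as a cheap stand-in
    for the diagonal of the Fisher information matrix.
    """
    fisher_diag = beta * fisher_diag + (1 - beta) * grad ** 2
    w = w - lr * grad / (fisher_diag + eps)   # curvature-scaled step
    return w, fisher_diag

# Toy check: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
f = np.zeros_like(w)
for _ in range(20):
    w, f = fisher_step(w, w, f)
```

Scaling each coordinate by its own curvature estimate is what lets such methods take larger steps in flat directions and smaller steps in sharp ones.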