Tal Arbel

Core Academic Member
Canada CIFAR AI Chair
Full Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Causality
Computer Vision
Deep Learning
Generative Models
Medical Machine Learning
Probabilistic Models
Representation Learning

Biography

Tal Arbel is a professor in the Department of Electrical and Computer Engineering at McGill University, where she is the director of the Probabilistic Vision Group and Medical Imaging Lab in the Centre for Intelligent Machines.

She is also a Canada CIFAR AI Chair, a core academic member of Mila – Quebec Artificial Intelligence Institute, and an associate member of the Goodman Cancer Research Centre.

Arbel’s research focuses on developing probabilistic deep learning methods in computer vision and medical image analysis for a wide range of real-world applications, with an emphasis on neurological diseases.

She is a recipient of the 2019 McGill Engineering Christophe Pierre Research Award and a Fellow of the Canadian Academy of Engineering. She regularly serves on the organizing committees of major international conferences in computer vision and medical image analysis (e.g., MICCAI, MIDL, ICCV, CVPR). She is also a co-founder and the current Editor-in-Chief of the arXiv overlay journal Machine Learning for Biomedical Imaging (MELBA).

Current Students

Master's Research - McGill University
Master's Research - McGill University
Master's Research - McGill University
PhD - McGill University
Master's Research - McGill University
Undergraduate - McGill University
Master's Research - McGill University
Master's Research - McGill University
Master's Research - McGill University
Master's Research - McGill University
Undergraduate - McGill University
Undergraduate - McGill University

Publications

Language-Guided Trajectory Traversal in Disentangled Stable Diffusion Latent Space for Factorized Medical Image Generation
Zahra Tehraninasab
Amar Kumar
Leveraging Vision-Language Foundation Models to Reveal Hidden Image-Attribute Relationships in Medical Imaging
Amar Kumar
Anita Kriz
B. Pertzov
PRISM: High-Resolution & Precise Counterfactual Medical Image Generation using Language-guided Stable Diffusion
Amar Kumar
Anita Kriz
Mohammad Havaei
Developing reliable and generalizable deep learning systems for medical imaging faces significant obstacles due to spurious correlations, data imbalances, and limited text annotations in datasets. Addressing these challenges requires architectures robust to the unique complexities posed by medical imaging data. The rapid advancements in vision-language foundation models within the natural image domain prompt the question of how they can be adapted for medical imaging tasks. In this work, we present PRISM, a framework that leverages foundation models to generate high-resolution, language-guided medical image counterfactuals using Stable Diffusion. Our approach demonstrates unprecedented precision in selectively modifying spurious correlations (the medical devices) and disease features, enabling the removal and addition of specific attributes while preserving other image characteristics. Through extensive evaluation, we show how PRISM advances counterfactual generation and enables the development of more robust downstream classifiers for clinically deployable solutions. To facilitate broader adoption and research, we make our code publicly available at https://github.com/Amarkr1/PRISM.
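
The prompt-guided editing idea behind PRISM can be sketched with the Hugging Face diffusers library. The snippet below is a hedged illustration only, not the PRISM pipeline itself: the checkpoint name, prompt, file names, and strength/guidance values are assumptions, and the actual method adapts and guides Stable Diffusion as described in the paper and the released code.

```python
# Hedged sketch of language-guided counterfactual editing with Stable Diffusion.
# NOT the PRISM implementation; see https://github.com/Amarkr1/PRISM for the real code.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: a generic Stable Diffusion checkpoint; PRISM adapts the model to medical data.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

scan = Image.open("chest_xray.png").convert("RGB").resize((512, 512))  # hypothetical input

# Counterfactual prompt: remove a spurious attribute (e.g. a medical device)
# while keeping disease features; the prompt wording is illustrative only.
counterfactual = pipe(
    prompt="chest x-ray, no pacemaker, no support devices",
    image=scan,
    strength=0.4,        # low strength -> preserve most image characteristics
    guidance_scale=7.5,  # weight of the text guidance
).images[0]

counterfactual.save("chest_xray_counterfactual.png")
```

Keeping the editing strength low is what lets most of the image survive while the prompted attribute is added or removed; the released repository implements the full, medically adapted version.
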
RL4Med-DDPO: Reinforcement Learning for Controlled Guidance Towards Diverse Medical Image Generation using Vision-Language Foundation Models
Parham Saremi
Amar Kumar
Mohammed Mohammed
Zahra Tehraninasab
Conditional Diffusion Models are Medical Image Classifiers that Provide Explainability and Uncertainty for Free
Gian Mario Favero
Parham Saremi
Emily Kaczmarek
Brennan Nichyporuk
Discriminative classifiers have become a foundational tool in deep learning for medical imaging, excelling at learning separable features of complex data distributions. However, these models often need careful design, augmentation, and training techniques to ensure safe and reliable deployment. Recently, diffusion models have become synonymous with generative modeling in 2D. These models showcase robustness across a range of tasks including natural image classification, where classification is performed by comparing reconstruction errors across images generated for each possible conditioning input. This work presents the first exploration of the potential of class conditional diffusion models for 2D medical image classification. First, we develop a novel majority voting scheme shown to improve the performance of medical diffusion classifiers. Next, extensive experiments on the CheXpert and ISIC Melanoma skin cancer datasets demonstrate that foundation and trained-from-scratch diffusion models achieve competitive performance against SOTA discriminative classifiers without the need for explicit supervision. In addition, we show that diffusion classifiers are intrinsically explainable, and can be used to quantify the uncertainty of their predictions, increasing their trustworthiness and reliability in safety-critical, clinical contexts. Further information is available on our project page: https://faverogian.github.io/med-diffusion-classifier.github.io/
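
The classification-by-generation idea in the abstract can be illustrated with a short, hedged PyTorch sketch: score each candidate class by how well a class-conditional denoiser predicts the noise added to the input image, averaged over random timesteps, and predict the lowest-error class. The `denoiser(x_t, t, y)` interface and the simple averaging below are illustrative assumptions and do not reproduce the paper's majority-voting scheme.

```python
# Hedged sketch: classify an image with a class-conditional diffusion model by
# comparing noise-prediction errors across candidate classes (assumed interfaces).
import torch
import torch.nn.functional as F

@torch.no_grad()
def diffusion_classify(denoiser, x, alphas_cumprod, num_classes, n_trials=32):
    """Return the class whose conditioning gives the lowest average noise-prediction error.

    denoiser(x_t, t, y) -> predicted noise       (assumed model interface)
    x:              clean image, shape (1, C, H, W)
    alphas_cumprod: 1-D tensor of cumulative noise-schedule products
    """
    T = alphas_cumprod.shape[0]
    errors = torch.zeros(num_classes, device=x.device)

    for _ in range(n_trials):
        t = torch.randint(0, T, (1,), device=x.device)       # random timestep
        noise = torch.randn_like(x)
        a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
        x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise  # forward diffusion step

        for y in range(num_classes):
            label = torch.full((1,), y, device=x.device, dtype=torch.long)
            pred_noise = denoiser(x_t, t, label)              # class-conditional prediction
            errors[y] += F.mse_loss(pred_noise, noise)

    return int(errors.argmin())                               # lowest error wins
```

The spread of these per-class errors also gives one natural handle on the prediction uncertainty that the abstract highlights, though the paper's own uncertainty and voting procedures should be taken from the project page.
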
The role of AI for MRI-analysis in multiple sclerosis—A brief overview
Jean-Pierre R. Falet
Steven Nobile
Aliya Szpindel
Berardino Barile
Amar Kumar
Joshua D. Durso-Finley
Douglas Arnold