
Tal Arbel

Core Academic Member
Canada CIFAR AI Chair
Full Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Medical Machine Learning
Representation Learning
Deep Learning
Causality
Generative Models
Probabilistic Models
Computer Vision

Biography

Tal Arbel is a Full Professor in the Department of Electrical and Computer Engineering at McGill University, where she leads the Probabilistic Vision Group and the Medical Imaging Lab at the Centre for Intelligent Machines.

She holds a Canada CIFAR AI Chair and is an associate member of Mila – Quebec Artificial Intelligence Institute and of the Goodman Cancer Research Centre. Professor Arbel's research focuses on the development of probabilistic deep learning methods for computer vision and medical image analysis, targeting a wide range of real-world applications with a particular emphasis on neurological diseases.

She received the 2019 Christophe Pierre Research Award from McGill Engineering and is a Fellow of the Canadian Academy of Engineering. She regularly serves on the organizing teams of major international conferences in computer vision and medical image analysis (e.g., those of the Medical Image Computing and Computer-Assisted Intervention Society/MICCAI and Medical Imaging with Deep Learning/MIDL, the International Conference on Computer Vision/ICCV, and the Conference on Computer Vision and Pattern Recognition/CVPR). She is Editor-in-Chief and co-founder of the journal Machine Learning for Biomedical Imaging (MELBA).

Current Students

PhD - McGill
Collaborating Alumni - McGill
Research Collaborator - McGill University
Master's Research - McGill
PhD - McGill
PhD - McGill
Master's Research - McGill
Research Collaborator - N/A
Undergraduate - McGill
Master's Research - McGill
Master's Research - McGill
Master's Research - McGill
Master's Research - McGill
Research Collaborator - UBC

Publications

Understanding metric-related pitfalls in image analysis validation
Annika Reinke
Minu Dietlinde Tizabi
Michael Baumgartner
Matthias Eisenmann
Doreen Heckmann-Nötzel
A. Emre Kavur
Tim Rädsch
Carole H. Sudre
Laura Acion
Michela Antonelli
Spyridon Bakas
Arriel Benis
Matthew Blaschko
Florian Buettner
M. Jorge Cardoso
Veronika Cheplygina
Jianxu Chen
Evangelia Christodoulou
Beth A. Cimini
Gary S. Collins
Keyvan Farahani
Luciana Ferrer
Adrian Galdran
Bram van Ginneken
Ben Glocker
Patrick Godau
Robert Cary Haase
Daniel A. Hashimoto
Michael M. Hoffman
Merel Huisman
Fabian Isensee
Pierre Jannin
Charles E. Kahn
Dagmar Kainmueller
Bernhard Kainz
Alexandros Karargyris
Alan Karthikesalingam
H. Kenngott
Jens Kleesiek
Florian Kofler
Thijs Kooi
Annette Kopp-Schneider
Michal Kozubek
Anna Kreshuk
Tahsin Kurc
Bennett A. Landman
Geert Litjens
Amin Madani
Klaus Maier-Hein
Anne L. Martel
Peter Mattson
Erik Meijering
Bjoern Menze
Karel G.M. Moons
Henning Müller
Felix Nickel
Jens Petersen
Susanne M. Rafelski
Nasir Rajpoot
Mauricio Reyes
Michael A. Riegler
Nicola Rieke
Julio Saez-Rodriguez
Clara I. Sánchez
Shravya Shetty
M. Smeden
Ronald M. Summers
Abdel Aziz Taha
Aleksei Tiulpin
Sotirios A. Tsaftaris
Ben Van Calster
Gael Varoquaux
Manuel Wiesenfarth
Ziv R. Yaniv
Paul F. Jäger
Lena Maier-Hein
DeCoDEx: Confounder Detector Guidance for Improved Diffusion-based Counterfactual Explanations
Deep learning classifiers are prone to latching onto dominant confounders present in a dataset rather than on the causal markers associated with the target class, leading to poor generalization and biased predictions. Although explainability via counterfactual image generation has been successful at exposing the problem, bias mitigation strategies that permit accurate explainability in the presence of dominant and diverse artifacts remain unsolved. In this work, we propose the DeCoDEx framework and show how an external, pre-trained binary artifact detector can be leveraged during inference to guide a diffusion-based counterfactual image generator towards accurate explainability. Experiments on the CheXpert dataset, using both synthetic artifacts and real visual artifacts (support devices), show that the proposed method successfully synthesizes the counterfactual images that change the causal pathology markers associated with Pleural Effusion while preserving or ignoring the visual artifacts. Augmentation of ERM and Group-DRO classifiers with the DeCoDEx generated images substantially improves the results across underrepresented groups that are out of distribution for each class. The code is made publicly available at https://github.com/NimaFathi/DeCoDEx.
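
The core mechanism lends itself to a short sketch. Below is a minimal, hypothetical illustration of detector-guided diffusion sampling, assuming generic `denoiser`, `classifier`, and `detector` modules and a simplified update rule; it is not the released DeCoDEx code (see the repository above for the real implementation).

```python
import torch

@torch.enable_grad()
def guided_step(denoiser, classifier, detector, x_t, t, target_class,
                p_det_ref, w_cls=1.0, w_det=1.0):
    """One hypothetical guidance step. The pre-trained artifact detector
    anchors the artifact prediction to its value on the factual image
    (p_det_ref), so the classifier gradient edits pathology markers
    rather than the confounding artifact."""
    x_t = x_t.detach().requires_grad_(True)

    # Classifier guidance: raise the log-probability of the target
    # (counterfactual) class.
    log_p = torch.log_softmax(classifier(x_t), dim=-1)[:, target_class].sum()

    # Detector guidance: penalize drift of the artifact prediction away
    # from its factual value.
    p_det = torch.sigmoid(detector(x_t))
    keep_artifact = -((p_det - p_det_ref) ** 2).sum()

    grad = torch.autograd.grad(w_cls * log_p + w_det * keep_artifact, x_t)[0]

    with torch.no_grad():
        eps = denoiser(x_t, t)  # predicted noise at step t
        # Simplified update; a real sampler rescales by noise-schedule terms.
        return x_t - eps + grad
```
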
Current AI applications in neurology: Brain imaging
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Raghav Mehta
Douglas Arnold
Nick Pawlowski
Debiasing Counterfactuals in the Presence of Spurious Correlations
Raghav Mehta
Jean-Pierre R. Falet
Sotirios A. Tsaftaris
Deep learning models can perform well in complex medical imaging classification tasks, even when basing their conclusions on spurious correlations (i.e. confounders), should they be prevalent in the training dataset, rather than on the causal image markers of interest. This would thereby limit their ability to generalize across the population. Explainability based on counterfactual image generation can be used to expose the confounders but does not provide a strategy to mitigate the bias. In this work, we introduce the first end-to-end training framework that integrates both (i) popular debiasing classifiers (e.g. distributionally robust optimization (DRO)) to avoid latching onto the spurious correlations and (ii) counterfactual image generation to unveil generalizable imaging markers of relevance to the task. Additionally, we propose a novel metric, Spurious Correlation Latching Score (SCLS), to quantify the extent of the classifier reliance on the spurious correlation as exposed by the counterfactual images. Through comprehensive experiments on two public datasets (with the simulated and real visual artifacts), we demonstrate that the debiasing method: (i) learns generalizable markers across the population, and (ii) successfully ignores spurious correlations and focuses on the underlying disease pathology.
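
The abstract names the SCLS metric without reproducing its formula, so the following is only a hypothetical illustration of the underlying idea: a classifier relying on the causal marker should change its confidence when a counterfactual flips that marker, while a classifier latched onto an unchanged spurious artifact should barely move.

```python
import torch

def latching_score_sketch(classifier, x_factual, x_counterfactual, target_class):
    """Hypothetical reading of the idea behind SCLS (not the paper's
    formula): measure how little the classifier's confidence in the target
    class moves when the counterfactual changes the causal marker. A score
    near 1 suggests the classifier is relying on something the
    counterfactual left untouched, i.e. the spurious correlation."""
    with torch.no_grad():
        p_fact = torch.softmax(classifier(x_factual), dim=-1)[:, target_class]
        p_cf = torch.softmax(classifier(x_counterfactual), dim=-1)[:, target_class]
    return 1.0 - (p_fact - p_cf).abs().mean()
```
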
Improving Image-Based Precision Medicine with Uncertainty-Aware Causal Models
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Raghav Mehta
Douglas Arnold
Nick Pawlowski
Image-based precision medicine aims to personalize treatment decisions based on an individual's unique imaging features so as to improve their clinical outcome. Machine learning frameworks that integrate uncertainty estimation as part of their treatment recommendations would be safer and more reliable. However, little work has been done in adapting uncertainty estimation techniques and validation metrics for precision medicine. In this paper, we use Bayesian deep learning for estimating the posterior distribution over factual and counterfactual outcomes on several treatments. This allows for estimating the uncertainty for each treatment option and for the individual treatment effects (ITE) between any two treatments. We train and evaluate this model to predict future new and enlarging T2 lesion counts on a large, multi-center dataset of MR brain images of patients with multiple sclerosis, exposed to several treatments during randomized controlled trials. We evaluate the correlation of the uncertainty estimate with the factual error, and, given the lack of ground truth counterfactual outcomes, demonstrate how uncertainty for the ITE prediction relates to bounds on the ITE error. Lastly, we demonstrate how knowledge of uncertainty could modify clinical decision-making to improve individual patient and clinical trial outcomes.
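
The general recipe is compact enough to sketch. Below, MC-dropout stands in for whichever Bayesian approximation the paper actually uses; `model`, its treatment-conditioned signature, and the sample count are illustrative assumptions.

```python
import torch

def ite_with_uncertainty(model, x, treat_a, treat_b, n_samples=50):
    """Sample the (approximate) posterior over future lesion counts under
    two treatments, then summarize the individual treatment effect (ITE)
    and its uncertainty. Dropout left active at inference provides the
    posterior samples in this sketch."""
    model.train()  # keep dropout active so each forward pass is a sample
    with torch.no_grad():
        y_a = torch.stack([model(x, treat_a) for _ in range(n_samples)])
        y_b = torch.stack([model(x, treat_b) for _ in range(n_samples)])
    ite = y_a - y_b                          # per-sample treatment effect
    return ite.mean(dim=0), ite.std(dim=0)   # ITE estimate and uncertainty
```
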
Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis
Changjian Shui
Raghav Mehta
Douglas Arnold
Grow-push-prune: Aligning deep discriminants for effective structural network compression
Qing Tian
James J. Clark
Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation
Junde Wu
Rao Fu
Huihui Fang
Yuanpei Liu
Zhao-Yang Wang
Yanwu Xu
Yueming Jin
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation due to its impressive capabilities in various segmentation tasks and its prompt-based interface. However, recent studies and individual experiments have shown that SAM underperforms in medical image segmentation due to its lack of medical-specific knowledge. This raises the question of how to enhance SAM's segmentation capability for medical images. In this paper, instead of fine-tuning the SAM model, we propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model using a light yet effective adaptation technique. In Med-SA, we propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical images and Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned adaptation. We conduct comprehensive evaluation experiments on 17 medical image segmentation tasks across various image modalities. Med-SA outperforms several state-of-the-art (SOTA) medical image segmentation methods, while updating only 2% of the parameters. Our code is released at https://github.com/KidsWithTokens/Medical-SAM-Adapter.
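
For readers unfamiliar with adapters, the sketch below shows the generic bottleneck-adapter idea that parameter-efficient methods of this kind build on; the actual SD-Trans and HyP-Adpt designs differ (see the paper and repository). Freezing the backbone and training only such small residual modules is what keeps the trainable fraction near 2% of the parameters.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Generic bottleneck adapter, inserted after a frozen transformer
    sub-layer; only these few parameters are trained. A sketch of the
    underlying idea, not the Med-SA code."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project to small bottleneck
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)    # project back up
        nn.init.zeros_(self.up.weight)          # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual update
```
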
Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis
Raghav Mehta
Changjian Shui
Although deep learning (DL) models have shown great success in many medical image analysis tasks, deployment of the resulting models into real clinical contexts requires: (1) that they exhibit robustness and fairness across different sub-populations, and (2) that the confidence in DL model predictions be accurately expressed in the form of uncertainties. Unfortunately, recent studies have indeed shown significant biases in DL models across demographic subgroups (e.g., race, sex, age) in the context of medical image analysis, indicating a lack of fairness in the models. Although several methods have been proposed in the ML literature to mitigate a lack of fairness in DL models, they focus entirely on the absolute performance between groups without considering their effect on uncertainty estimation. In this work, we present the first exploration of the effect of popular fairness models on overcoming biases across subgroups in medical image analysis in terms of bottom-line performance, and their effects on uncertainty quantification. We perform extensive experiments on three different clinically relevant tasks: (i) skin lesion classification, (ii) brain tumour segmentation, and (iii) Alzheimer's disease clinical score regression. Our results indicate that popular ML methods, such as data-balancing and distributionally robust optimization, succeed in mitigating fairness issues in terms of the model performances for some of the tasks. However, this can come at the cost of poor uncertainty estimates associated with the model predictions. This tradeoff must be mitigated if fairness models are to be adopted in medical image analysis.
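
One concrete way to measure the tradeoff the abstract describes is a per-subgroup calibration error. The sketch below computes expected calibration error (ECE) separately for each demographic group; the function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def subgroup_ece(conf, correct, group, n_bins=10):
    """ECE per subgroup. A fairness method can equalize accuracy across
    groups while leaving large gaps between per-group ECE values -- the
    kind of tradeoff the paper measures. `conf` holds predicted
    confidences, `correct` 0/1 outcomes, `group` a subgroup label each."""
    ece = {}
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for g in np.unique(group):
        c, ok = conf[group == g], correct[group == g]
        idx = np.clip(np.digitize(c, bins) - 1, 0, n_bins - 1)
        err = 0.0
        for b in range(n_bins):
            m = idx == b
            if m.any():
                # |mean confidence - accuracy| in the bin, frequency-weighted
                err += m.mean() * abs(c[m].mean() - ok[m].mean())
        ece[g] = err
    return ece
```
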
Personalized Prediction of Future Lesion Activity and Treatment Effect in Multiple Sclerosis from Baseline MRI
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Douglas Arnold
Precision medicine for chronic diseases such as multiple sclerosis (MS) involves choosing a treatment which best balances efficacy and side effects/preferences for individual patients. Making this choice as early as possible is important, as delays in finding an effective therapy can lead to irreversible disability accrual. To this end, we present the first deep neural network model for individualized treatment decisions from baseline magnetic resonance imaging (MRI) (with clinical information if available) for MS patients which (a) predicts future new and enlarging T2 weighted (NE-T2) lesion counts on follow-up MRI on multiple treatments and (b) estimates the conditional average treatment effect (CATE), as defined by the predicted future suppression of NE-T2 lesions, between different treatment options relative to placebo. Our model is validated on a proprietary federated dataset of 1817 multi-sequence MRIs acquired from MS patients during four multi-centre randomized clinical trials. Our framework achieves high average precision in the binarized regression of future NE-T2 lesions on five different treatments, identifies heterogeneous treatment effects, and provides a personalized treatment recommendation that accounts for treatment-associated risk (side effects, patient preferences, administration difficulties, ...).
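
The decision logic can be sketched as follows, with all names and the risk-penalty term as illustrative assumptions (the paper's model and recommendation rule are more involved): predict lesion counts under each option, take the predicted suppression relative to placebo as the treatment effect, and trade it off against treatment-associated risk.

```python
import torch

def recommend_treatment(model, mri, clinical, treatments, risk_penalty):
    """Illustrative sketch only: `model(mri, clinical, t)` is assumed to
    return a scalar tensor of predicted future NE-T2 lesion counts under
    treatment t; `risk_penalty` maps each treatment to a risk score."""
    with torch.no_grad():
        y_placebo = model(mri, clinical, "placebo")
        scores = {}
        for t in treatments:
            # Predicted lesion suppression relative to placebo (the CATE).
            cate = (y_placebo - model(mri, clinical, t)).item()
            # Trade predicted efficacy off against treatment-associated risk.
            scores[t] = cate - risk_penalty[t]
    return max(scores, key=scores.get)
```
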
Segmentation-Consistent Probabilistic Lesion Counting
Julien Schroeter
Chelsea Myers-Colet
Douglas Arnold
Lesion counts are important indicators of disease severity, patient prognosis, and treatment efficacy, yet counting as a task in medical imaging is often overlooked in favor of segmentation. This work introduces a novel continuously differentiable function that maps lesion segmentation predictions to lesion count probability distributions in a consistent manner. The proposed end-to-end approach—which consists of voxel clustering, lesion-level voxel probability aggregation, and Poisson-binomial counting—is non-parametric and thus offers a robust and consistent way to augment lesion segmentation models with post hoc counting capabilities. Experiments on Gadolinium-enhancing lesion counting demonstrate that our method outputs accurate and well-calibrated count distributions that capture meaningful uncertainty information. They also reveal that our model is suitable for multi-task learning of lesion segmentation, is efficient in low data regimes, and is robust to adversarial attacks.
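
The Poisson-binomial counting step named in the abstract is well defined and compact enough to sketch: given per-lesion existence probabilities (which the model aggregates from voxel predictions), the count distribution follows from a standard dynamic program. This is a generic illustration of the distribution itself, not the paper's implementation.

```python
import numpy as np

def poisson_binomial_pmf(lesion_probs):
    """PMF of a Poisson-binomial distribution: the count distribution over
    independent lesion candidates with distinct existence probabilities.
    pmf[k] = P(exactly k lesions)."""
    pmf = np.array([1.0])            # P(0 lesions) before any candidate
    for p in lesion_probs:
        new = np.zeros(len(pmf) + 1)
        new[:-1] += pmf * (1.0 - p)  # candidate absent: count unchanged
        new[1:] += pmf * p           # candidate present: count + 1
        pmf = new
    return pmf

# e.g. three lesion candidates with probabilities 0.9, 0.6, 0.2:
# poisson_binomial_pmf([0.9, 0.6, 0.2]) -> P(count = 0), ..., P(count = 3)
```
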