
Tal Arbel

Core Academic Member
Canada CIFAR AI Chair
Full Professor, McGill University, Department of Electrical and Computer Engineering

Biography

Tal Arbel is a Full Professor in the Department of Electrical and Computer Engineering at McGill University, where she directs the Probabilistic Vision Group and the Medical Imaging Lab at the Centre for Intelligent Machines.

She holds a Canada CIFAR AI Chair and is an associate member of Mila – Quebec Artificial Intelligence Institute and of the Goodman Cancer Research Centre. Professor Arbel's research focuses on the development of probabilistic deep learning methods for computer vision and medical image analysis, addressing a wide range of real-world applications with a particular emphasis on neurological diseases.

She received the 2019 Christophe Pierre Award for Research Excellence from McGill Engineering. She regularly serves on the organizing committees of major international conferences in computer vision and medical image analysis (e.g., those of the Medical Image Computing and Computer-Assisted Intervention Society/MICCAI, Medical Imaging with Deep Learning/MIDL, the International Conference on Computer Vision/ICCV, and the Conference on Computer Vision and Pattern Recognition/CVPR). She is editor-in-chief and co-founder of the journal Machine Learning for Biomedical Imaging (MELBA).

Current Students

Master's Research - McGill University
Master's Research - McGill University
PhD - McGill University
Master's Research - McGill University
Master's Research - McGill University
Master's Research - McGill University
Master's Research - McGill University
Master's Research - McGill University

Publications

Metrics Reloaded - A new recommendation framework for biomedical image analysis validation
Annika Reinke
Lena Maier-Hein
Evangelia Christodoulou
Ben Glocker
Patrick Scholz
Fabian Isensee
Jens Kleesiek
Michal Kozubek
Mauricio Reyes
Michael Alexander Riegler
Manuel Wiesenfarth
Michael Baumgartner
Matthias Eisenmann
Doreen Heckmann-Notzel
Ali Emre Kavur
Tim Radsch
Minu D. Tizabi
Laura Acion
Michela Antonelli
Spyridon Bakas
Peter Bankhead
Arriel Benis
M. Jorge Cardoso
Veronika Cheplygina
Beth A Cimini
Gary S. Collins
Keyvan Farahani
Bram van Ginneken
Fred A Hamprecht
Daniel A. Hashimoto
Michael M. Hoffman
Merel Huisman
Pierre Jannin
Charles Kahn
Alexandros Karargyris
Alan Karthikesalingam
Hannes Kenngott
Annette Kopp-Schneider
Anna Kreshuk
Tahsin Kurc
Bennett Landman
Geert Litjens
Amin Madani
Klaus Maier-Hein
Anne Martel
Peter Mattson
Erik Meijering
Bjoern Menze
David Moher
Karel G.M. Moons
Henning Müller
Brennan Nichyporuk
Felix Nickel
Jens Petersen
Nasir Rajpoot
Nicola Rieke
Julio Saez-Rodriguez
Clara I. Sánchez
Shravya Shetty
Maarten van Smeden
Carole H. Sudre
Ronald M. Summers
Abdel A. Taha
Sotirios A. Tsaftaris
Ben Van Calster
Gael Varoquaux
Paul F Jaeger
Meaningful performance assessment of biomedical image analysis algorithms depends on objective and appropriate performance metrics, and there are major shortcomings in the current state of the art. So far, limited attention has been paid to the practical pitfalls associated with using particular metrics for image analysis tasks. Therefore, a number of international initiatives have collaborated to offer researchers guidance and tools for selecting performance metrics in a problem-aware manner. In our proposed framework, the characteristics of the given biomedical problem are first captured in a problem fingerprint, which identifies properties related to domain interests, the target structure(s), the input datasets, and the algorithm output. In a second step, a problem category-specific mapping is applied to match fingerprints to metrics that reflect domain requirements. Based on input from experts at more than 60 institutions worldwide, we believe our metric recommendation framework will be useful to the MIDL community and will enhance the quality of biomedical image analysis algorithm validation.
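To make the fingerprint-to-metric idea concrete, here is a minimal, illustrative sketch of mapping a tiny problem "fingerprint" to candidate metrics. The fingerprint fields and the recommendation rules below are simplified assumptions for illustration, not the actual Metrics Reloaded mapping.

```python
# Illustrative fingerprint -> metric mapping (assumed, simplified rules).
def recommend_metrics(task_type, target_is_small, output_is_probabilistic):
    """Map a toy problem 'fingerprint' to candidate validation metrics."""
    metrics = []
    if task_type == "segmentation":
        metrics.append("Dice")
        if target_is_small:
            # Overlap metrics are unstable for tiny structures, so add a
            # boundary-based metric as a complement.
            metrics.append("NSD (normalized surface distance)")
    elif task_type == "detection":
        metrics.append("F1 @ IoU threshold")
    if output_is_probabilistic:
        metrics.append("Expected calibration error")
    return metrics
```

For example, a segmentation task with small target structures would map to an overlap metric plus a boundary-based complement under these toy rules.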
Deep Learning Prediction of Response to Disease Modifying Therapy in Primary Progressive Multiple Sclerosis (P1-1.Virtual)
Jean-Pierre R. Falet
Joshua D. Durso-Finley
Brennan Nichyporuk
Julien Schroeter
Francesca Bovis
Maria-Pia Sormani
Douglas Arnold
Personalized Prediction of Future Lesion Activity and Treatment Effect in Multiple Sclerosis from Baseline MRI
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Brennan Nichyporuk
Douglas Arnold
Precision medicine for chronic diseases such as multiple sclerosis (MS) involves choosing a treatment which best balances efficacy and side effects/preferences for individual patients. Making this choice as early as possible is important, as delays in finding an effective therapy can lead to irreversible disability accrual. To this end, we present the first deep neural network model for individualized treatment decisions from baseline magnetic resonance imaging (MRI) (with clinical information, if available) for MS patients which (a) predicts future new and enlarging T2-weighted (NE-T2) lesion counts on follow-up MRI under multiple treatments and (b) estimates the conditional average treatment effect (CATE), defined as the predicted future suppression of NE-T2 lesions, between different treatment options relative to placebo. Our model is validated on a proprietary federated dataset of 1817 multi-sequence MRIs acquired from MS patients during four multi-centre randomized clinical trials. Our framework achieves high average precision in the binarized regression of future NE-T2 lesions on five different treatments, identifies heterogeneous treatment effects, and provides a personalized treatment recommendation that accounts for treatment-associated risk (side effects, patient preferences, administration difficulties).
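The CATE definition above reduces to a simple per-patient comparison: predicted future lesion count on placebo minus the prediction on a given treatment. A minimal sketch, using a stand-in lookup table of predictions rather than the paper's network (patient IDs and arm names below are hypothetical):

```python
# Toy predictions: patient_id -> {arm: predicted future NE-T2 lesion count}.
predicted_lesions = {
    "p1": {"placebo": 4.0, "drug_a": 1.0, "drug_b": 3.5},
    "p2": {"placebo": 2.0, "drug_a": 1.8, "drug_b": 0.5},
}

def cate(patient, arm):
    """Predicted lesion suppression of `arm` relative to placebo."""
    preds = predicted_lesions[patient]
    return preds["placebo"] - preds[arm]

def best_arm(patient, arms=("drug_a", "drug_b")):
    # Recommend the arm with the largest predicted lesion suppression.
    return max(arms, key=lambda a: cate(patient, a))
```

In this toy table the recommendation differs per patient (drug_a for p1, drug_b for p2), which is exactly the heterogeneous treatment effect the model is designed to surface; a real recommendation would additionally weigh treatment-associated risk.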
On Learning Fairness and Accuracy on Multiple Subgroups
Changjian Shui
Gezheng Xu
Qi CHEN
Jiaqi Li
Charles Ling
Boyu Wang
We propose an analysis of fair learning that preserves the utility of the data while reducing prediction disparities under the criterion of group sufficiency. We focus on the scenario where the data contain multiple or even many subgroups, each with a limited number of samples. We present a principled method for learning a fair predictor for all subgroups by formulating it as a bilevel objective. Specifically, the subgroup-specific predictors are learned in the lower level from a small amount of data and the fair predictor. In the upper level, the fair predictor is updated to be close to all subgroup-specific predictors. We further prove that such a bilevel objective can effectively control the group sufficiency and the generalization error. We evaluate the proposed framework on real-world datasets. Empirical evidence suggests consistently improved fair predictions, as well as accuracy comparable to the baselines.
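The bilevel structure above can be sketched with scalar mean predictors in place of neural networks (an illustrative assumption): the lower level fits each subgroup's predictor on its few samples, shrunk toward the shared fair predictor; the upper level moves the fair predictor toward the subgroup predictors.

```python
# Toy bilevel optimization with scalar predictors (illustrative only).
subgroup_data = {
    "g1": [1.0, 1.2, 0.8],   # each subgroup has only a few samples
    "g2": [3.0, 2.8],
    "g3": [2.0],
}

def fit_bilevel(data, reg=1.0, lr=0.5, steps=50):
    fair = 0.0
    for _ in range(steps):
        # Lower level: ridge-style per-subgroup fit, shrunk toward the
        # current fair predictor (the shrinkage encodes the small-sample
        # regularization described in the abstract).
        sub = {}
        for g, xs in data.items():
            sub[g] = (sum(xs) + reg * fair) / (len(xs) + reg)
        # Upper level: fair predictor tracks the subgroup predictors.
        target = sum(sub.values()) / len(sub)
        fair += lr * (target - fair)
    return fair, sub

fair, sub = fit_bilevel(subgroup_data)
```

The single-sample subgroup g3 is pulled strongly toward the shared predictor, while the larger subgroups stay close to their own means; this is the mechanism by which the shared predictor stabilizes data-poor subgroups.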
Estimating treatment effect for individuals with progressive multiple sclerosis using deep learning
JR Falet
Joshua D. Durso-Finley
Brennan Nichyporuk
Jan Schroeter
Francesca Bovis
Maria-Pia Sormani
Douglas Arnold
Cohort Bias Adaptation in Aggregated Datasets for Lesion Segmentation
Brennan Nichyporuk
Jillian L. Cardinell
Justin Szeto
Raghav Mehta
Sotirios A. Tsaftaris
Douglas Arnold
HAD-Net: A Hierarchical Adversarial Knowledge Distillation Network for Improved Enhanced Tumour Segmentation Without Post-Contrast Images
Saverio Vadacchino
Raghav Mehta
Nazanin Mohammadi Sepahvand
Brennan Nichyporuk
James J. Clark
Segmentation of enhancing tumours or lesions from MRI is important for detecting new disease activity in many clinical contexts. However, accurate segmentation requires the inclusion of medical images (e.g., T1 post-contrast MRI) acquired after injecting patients with a contrast agent (e.g., Gadolinium), a process no longer thought to be safe. Although a number of modality-agnostic segmentation networks have been developed over the past few years, they have met with limited success in the context of enhancing pathology segmentation. In this work, we present HAD-Net, a novel offline adversarial knowledge distillation (KD) technique, whereby a pre-trained teacher segmentation network, with access to all MRI sequences, teaches a student network, via hierarchical adversarial training, to better overcome the large domain shift presented when crucial images are absent during inference. In particular, we apply HAD-Net to the challenging task of enhancing tumour segmentation when access to post-contrast imaging is not available. The proposed network is trained and tested on the BraTS 2019 brain tumour segmentation challenge dataset, where it achieves performance improvements in the range of 16%-26% over (a) recent modality-agnostic segmentation methods (U-HeMIS, U-HVED), (b) KD-Net adapted to this problem, (c) the pre-trained student network, and (d) a non-hierarchical version of the network (AD-Net), in terms of Dice scores for enhancing tumour (ET). The network also shows improvements in tumour core (TC) Dice scores. Finally, the network outperforms both the baseline student network and AD-Net in terms of uncertainty quantification for enhancing tumour segmentation, based on the BraTS 2019 uncertainty challenge metrics. Our code is publicly available at: https://github.com/SaverioVad/HAD_Net
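The core teacher-student setup can be sketched in miniature: a teacher that sees all inputs supervises a student that sees only a subset. The models below are toy linear predictors and the loss is plain output matching, deliberately omitting HAD-Net's hierarchical adversarial discriminators; this is an intentional simplification, not the paper's method.

```python
# Simplified knowledge-distillation sketch: the student (missing one
# "sequence") learns a scale w so its output matches the teacher's.
def teacher(full_inputs):
    # Teacher uses every input, including the "post-contrast" one.
    return sum(full_inputs) / len(full_inputs)

def student(partial_inputs, w):
    # Student sees only the remaining inputs, scaled by a learned w.
    return w * sum(partial_inputs) / len(partial_inputs)

def distill(data, w=0.0, lr=0.1, epochs=200):
    """data: list of (full_inputs, partial_inputs) pairs."""
    for _ in range(epochs):
        for full, part in data:
            err = student(part, w) - teacher(full)   # match teacher output
            grad = 2 * err * sum(part) / len(part)   # d(err^2)/dw
            w -= lr * grad
    return w

data = [([1.0, 2.0, 3.0], [1.0, 2.0]), ([2.0, 4.0, 6.0], [2.0, 4.0])]
w = distill(data)
```

After training, the student reproduces the teacher's outputs without ever seeing the withheld input, which is the inference-time situation (no post-contrast imaging) the paper targets.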
Common limitations of performance metrics in biomedical image analysis
Annika Reinke
Matthias Eisenmann
Minu Dietlinde Tizabi
Carole H. Sudre
Tim Radsch
Michela Antonelli
Spyridon Bakas
M. Jorge Cardoso
Veronika Cheplygina
Keyvan Farahani
Ben Glocker
Doreen Heckmann-Notzel
Fabian Isensee
Pierre Jannin
Charles Kahn
Jens Kleesiek
Tahsin Kurc
Michal Kozubek
Bennett Landman
Geert Litjens
Klaus Maier-Hein
Anne Louise Martel
Bjoern Menze
Henning Müller
Jens Petersen
Mauricio Reyes
Nicola Rieke
Bram Stieltjes
Ronald M. Summers
Sotirios A. Tsaftaris
Bram van Ginneken
Annette Kopp-Schneider
Paul Jäger
Lena Maier-Hein
Optimizing Operating Points for High Performance Lesion Detection and Segmentation Using Lesion Size Reweighting
Brennan Nichyporuk
Justin Szeto
Douglas Arnold
There are many clinical contexts which require accurate detection and segmentation of all focal pathologies (e.g. lesions, tumours) in patient images. In cases where there is a mix of small and large lesions, the standard binary cross-entropy loss will result in better segmentation of large lesions at the expense of missing small ones. Adjusting the operating point to accurately detect all lesions generally leads to oversegmentation of large lesions. In this work, we propose a novel reweighting strategy to eliminate this performance gap, increasing small-pathology detection performance while maintaining segmentation accuracy. We show that our reweighting strategy vastly outperforms competing strategies in experiments on a large-scale, multi-scanner, multi-center dataset of multiple sclerosis patient images.
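A minimal sketch of size-aware reweighting for a voxelwise binary cross-entropy loss: voxels belonging to small lesions receive larger weights so they are not drowned out by large lesions. The inverse-size weighting rule below is an illustrative assumption, not the paper's exact scheme.

```python
import math

def weighted_bce(preds, labels, lesion_sizes, eps=1e-7):
    """Size-reweighted binary cross-entropy over flattened voxels.

    preds/labels: flat lists of predicted probabilities and {0,1} labels;
    lesion_sizes[i]: voxel count of the lesion voxel i belongs to
    (use 1 for background voxels).
    """
    total, wsum = 0.0, 0.0
    for p, y, s in zip(preds, labels, lesion_sizes):
        # Inverse-size weight: every lesion contributes roughly equally
        # to the loss regardless of its voxel count.
        w = 1.0 / s if y == 1 else 1.0
        p = min(max(p, eps), 1 - eps)        # clamp for numerical safety
        total += w * -(y * math.log(p) + (1 - y) * math.log(1 - p))
        wsum += w
    return total / wsum
```

With plain BCE, a missed 3-voxel lesion costs 100x less than a missed 300-voxel lesion; under this weighting the two misses cost roughly the same, shifting the operating point toward detecting small lesions.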
Common Limitations of Image Processing Metrics: A Picture Story
Annika Reinke
Matthias Eisenmann
Minu Dietlinde Tizabi
Carole H. Sudre
Tim Radsch
Michela Antonelli
Spyridon Bakas
M. Cardoso
Veronika Cheplygina
Keyvan Farahani
B. Glocker
Doreen Heckmann-Notzel
Fabian Isensee
Pierre Jannin
Charles E. Jr. Kahn
Jens Kleesiek
Tahsin Kurc
Michal Kozubek
Bennett Landman
G. Litjens
Klaus Maier-Hein
Bjoern Menze
Henning Müller
Jens Petersen
Mauricio Reyes
Nicola Rieke
Bram Stieltjes
R. Summers
Sotirios A. Tsaftaris
Bram van Ginneken
Annette Kopp-Schneider
Paul F. Jäger
Lena Maier-Hein
Task dependent deep LDA pruning of neural networks
Qing Tian
James J. Clark
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier
Pierre Delaunay
Mirko Bronzi
Assya Trofimov
Brennan Nichyporuk
Justin Szeto
Naz Sepah
Edward Raff
Kanika Madan
Vikram Voleti
Vincent Michalski
Dmitriy Serdyuk
Gael Varoquaux
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator better approximates the ideal estimator, at a 51-fold reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
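The core point can be illustrated with a toy benchmark: a single run of each pipeline can mislead, while averaging over several sources of variation (here, just random seeds) gives a more honest comparison. The "pipelines" below are random-score stand-ins, not real training runs.

```python
import random
import statistics

def run_pipeline(base_score, seed):
    # Stand-in for one training run: the true score plus seed-dependent
    # noise (initialization, data order, etc. collapsed into one term).
    rng = random.Random(seed)
    return base_score + rng.gauss(0, 0.02)

def compare(base_a, base_b, n_seeds=20):
    """Mean score difference A - B over n_seeds runs each, plus the
    per-run spread of A (a rough scale for judging the difference)."""
    a = [run_pipeline(base_a, s) for s in range(n_seeds)]
    b = [run_pipeline(base_b, s + 1000) for s in range(n_seeds)]
    return statistics.mean(a) - statistics.mean(b), statistics.stdev(a)

diff, spread = compare(0.85, 0.84)
```

Here the true gap (0.01) is smaller than the per-run noise (sigma = 0.02), so any single pair of runs can easily rank the pipelines the wrong way; only the multi-seed average recovers a difference on the right scale.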