Portrait of Tal Arbel

Tal Arbel

Core Academic Member
Canada CIFAR AI Chair
Full Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Medical Machine Learning
Representation Learning
Deep Learning
Causality
Generative Models
Probabilistic Models
Computer Vision

Biography

Tal Arbel is a Full Professor in the Department of Electrical and Computer Engineering at McGill University, where she leads the Probabilistic Vision Group and the Medical Imaging Lab at the Centre for Intelligent Machines.

She holds a Canada CIFAR AI Chair and is an associate member of Mila – Quebec Artificial Intelligence Institute and of the Goodman Cancer Research Centre. Professor Arbel's research focuses on the development of probabilistic deep learning methods for computer vision and medical image analysis, targeting a wide range of real-world applications, with a particular emphasis on neurological diseases.

She was awarded the 2019 McGill Engineering Christophe Pierre Research Award. She regularly serves on the organizing teams of major international conferences in computer vision and medical image analysis (e.g., those of the Medical Image Computing and Computer-Assisted Intervention Society/MICCAI and Medical Imaging with Deep Learning/MIDL, the International Conference on Computer Vision/ICCV, and the Conference on Computer Vision and Pattern Recognition/CVPR). She is Editor-in-Chief and co-founder of the journal Machine Learning for Biomedical Imaging (MELBA).

Current Students

Postdoctorate - McGill
PhD - McGill
Master's Research - McGill
Master's Research - McGill
Master's Research - McGill
Master's Research - McGill
PhD - McGill
Master's Research - McGill
Master's Research - McGill
Master's Research - McGill
Master's Research - McGill
Undergraduate - McGill
Undergraduate - McGill

Publications

Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier
Pierre Delaunay
Mirko Bronzi
Assya Trofimov
Brennan Nichyporuk
Justin Szeto
Naz Sepah
Edward Raff
Kanika Madan
Vikram Voleti
Vincent Michalski
Dmitriy Serdyuk
Gael Varoquaux
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in the light of this variance. We show a counter-intuitive result that adding more sources of variation to an imperfect estimator better approaches the ideal estimator, at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
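A minimal sketch of the idea behind this benchmarking analysis: re-run a pipeline while re-drawing the seed for one source of variation at a time, and compare the resulting score variances. The `train_and_eval` stub and its noise magnitudes below are hypothetical placeholders, not the paper's experimental setup.

```python
# Sketch: estimate how much of the spread in a benchmark score comes from each
# source of variation by re-running a pipeline with independently re-drawn seeds.
import numpy as np

rng = np.random.default_rng(0)

def train_and_eval(data_seed, init_seed, hparam_seed):
    # Placeholder pipeline: the test score is a base accuracy plus independent
    # noise contributions from data sampling, weight init, and hyperparameters.
    base = 0.80
    data_noise = np.random.default_rng(data_seed).normal(0, 0.020)
    init_noise = np.random.default_rng(init_seed).normal(0, 0.010)
    hparam_noise = np.random.default_rng(hparam_seed).normal(0, 0.015)
    return base + data_noise + init_noise + hparam_noise

def score_variance(vary, n_trials=50):
    """Re-run the pipeline n_trials times, re-drawing only the chosen seed(s)."""
    fixed = {"data_seed": 1, "init_seed": 2, "hparam_seed": 3}
    scores = []
    for _ in range(n_trials):
        seeds = dict(fixed)
        for name in vary:
            seeds[name] = int(rng.integers(1 << 30))
        scores.append(train_and_eval(**seeds))
    return np.var(scores)

for source in (["data_seed"], ["init_seed"], ["hparam_seed"],
               ["data_seed", "init_seed", "hparam_seed"]):
    print(source, f"variance = {score_variance(source):.6f}")
```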
Deep LDA-Pruned Nets for Efficient Facial Gender Classification
Qing Tian
James J. Clark
Many real-time tasks, such as human-computer interaction, require fast and efficient facial gender classification. Although deep CNN nets have been very effective for a multitude of classification tasks, their high space and time demands make them impractical for personal computers and mobile devices without a powerful GPU. In this paper, we develop a 16-layer, yet lightweight, neural network which boosts efficiency while maintaining high accuracy. Our net is pruned from the VGG-16 model [35], starting from the last convolutional (conv) layer, where we find neuron activations are highly uncorrelated given the gender. Through Fisher’s Linear Discriminant Analysis (LDA) [8], we show that this high decorrelation makes it safe to directly discard last conv layer neurons with high within-class variance and low between-class variance. Combined with either Support Vector Machines (SVM) or Bayesian classification, the reduced CNNs are capable of achieving comparable (or even higher) accuracies on the LFW and CelebA datasets than the original net with fully connected layers. On LFW, only four Conv5_3 neurons are able to maintain a comparably high recognition accuracy, which results in a reduction of total network size by a factor of 70X with an 11-fold speedup. Comparisons with a state-of-the-art pruning method [12] (as well as two smaller nets [20, 24]) in terms of accuracy loss and convolutional layer pruning rate are also provided.
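A rough illustration of the LDA-based selection step described above, assuming activations have already been pooled to one scalar per neuron per image; the data here is synthetic and the layer size arbitrary, not taken from VGG-16.

```python
# Sketch: rank last-conv-layer neurons by a Fisher/LDA criterion (high
# between-class variance, low within-class variance) and keep the top scorers.
import numpy as np

def fisher_scores(acts, labels):
    """acts: (n_images, n_neurons) activations; labels: (n_images,) in {0, 1}."""
    a0, a1 = acts[labels == 0], acts[labels == 1]
    between = (a0.mean(axis=0) - a1.mean(axis=0)) ** 2
    within = a0.var(axis=0) + a1.var(axis=0) + 1e-12   # avoid division by zero
    return between / within

def select_neurons(acts, labels, keep=4):
    """Indices of the `keep` neurons with the largest Fisher score."""
    return np.argsort(fisher_scores(acts, labels))[::-1][:keep]

# Toy usage with random activations standing in for pooled Conv5_3 outputs.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 512))
labels = rng.integers(0, 2, size=200)
acts[:, 7] += 3.0 * labels        # make neuron 7 strongly class-dependent
print(select_neurons(acts, labels, keep=4))
```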
Deep discriminant analysis for task-dependent compact network search
Qing Tian
James J. Clark
Most of today's popular deep architectures are hand-engineered for general-purpose applications. However, this design procedure usually leads to massive redundant, useless, or even harmful features for specific tasks. Such unnecessarily high complexities render deep nets impractical for many real-world applications, especially those without powerful GPU support. In this paper, we attempt to derive task-dependent compact models from a deep discriminant analysis perspective. We propose an iterative and proactive approach for classification tasks which alternates between (1) a pushing step, with an objective to simultaneously maximize class separation, penalize co-variances, and push deep discriminants into alignment with a compact set of neurons, and (2) a pruning step, which discards less useful or even interfering neurons. Deconvolution is adopted to reverse 'unimportant' filters' effects and recover useful contributing sources. A simple network growing strategy based on the basic Inception module is proposed for challenging tasks requiring larger capacity than what the base net can offer. Experiments on the MNIST, CIFAR10, and ImageNet datasets demonstrate our approach's efficacy. On ImageNet, by pushing and pruning our grown Inception-88 model, we achieve better-performing models than smaller grown deep Inception nets, residual nets, and well-known compact nets at similar sizes. We also show that our grown deep Inception nets (without hard-coded dimension alignment) can beat residual nets of similar complexities.
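A structural sketch of the alternating push/prune schedule, demonstrated on a toy numpy linear model whose input columns stand in for neurons; the training objective and saliency measure here are simplified stand-ins, not the paper's discriminant-analysis formulation.

```python
# Sketch: alternate a "pushing" phase (training, so useful units accumulate
# discriminative weight) with a "pruning" phase (drop the weakest survivors).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(float)  # only column 0 matters
W = rng.normal(scale=0.1, size=32)
keep = np.ones(32, dtype=bool)

for _ in range(4):
    # Pushing step: a few gradient steps on logistic loss over surviving units.
    for _ in range(200):
        p = 1 / (1 + np.exp(-(X[:, keep] @ W[keep])))
        W[keep] -= 0.1 * X[:, keep].T @ (p - y) / len(y)
    # Pruning step: discard the 25% of surviving units with the smallest
    # contribution (|weight| as a stand-in for a discriminant score).
    scores = np.abs(W) * keep
    threshold = np.quantile(scores[keep], 0.25)
    keep = keep & (scores > threshold)

print("units kept:", np.flatnonzero(keep))
```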
BIAS: Transparent reporting of biomedical image analysis challenges
Lena Maier-Hein
Annika Reinke
Michal Kozubek
Anne L. Martel
Matthias Eisenmann
Allan Hanbury
Pierre Jannin
Henning Müller
Sinan Onogur
Julio Saez-Rodriguez
Bram van Ginneken
Annette Kopp-Schneider
Bennett Landman
Medical Imaging with Deep Learning: MIDL 2020 - Short Paper Track
Ismail Ben Ayed
Marleen de Bruijne
Maxime Descoteaux
This compendium gathers all the accepted extended abstracts from the Third International Conference on Medical Imaging with Deep Learning (MIDL 2020), held in Montreal, Canada, 6-9 July 2020. Note that only accepted extended abstracts are listed here; the Proceedings of the MIDL 2020 Full Paper Track are published in the Proceedings of Machine Learning Research (PMLR).
Evaluating White Matter Lesion Segmentations with Refined Sørensen-Dice Analysis
Aaron Carass
Snehashis Roy
Adrian Gherman
Jacob C. Reinhold
Andrew Jesson
Oskar Maier
Heinz Handels
Mohsen Ghafoorian
Bram Platel
Ariel Birenbaum
Hayit Greenspan
Dzung L. Pham
Ciprian M. Crainiceanu
Peter A. Calabresi
Jerry L. Prince
William R. Gray Roncal
Russell T. Shinohara
Ipek Oguz
Uncertainty Evaluation Metric for Brain Tumour Segmentation
Raghav Mehta
Angelos Filos
Yarin Gal
In this paper, we develop a metric designed to assess and rank uncertainty measures for the task of brain tumour sub-tissue segmentation in the BraTS 2019 sub-challenge on uncertainty quantification. The metric is designed to: (1) reward uncertainty measures where high confidence is assigned to correct assertions, and where incorrect assertions are assigned low confidence, and (2) penalize measures that have higher percentages of under-confident correct assertions. Here, the workings of the components of the metric are explored based on a number of popular uncertainty measures evaluated on the BraTS 2019 dataset.
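A simplified reading of this kind of uncertainty-filtering evaluation (not the official BraTS evaluation code): sweep thresholds on the uncertainty map, keep only voxels below each threshold, and report both the Dice over retained voxels and the fraction of correct voxels that were filtered out, which an under-confident measure would inflate.

```python
# Sketch: evaluate an uncertainty map by filtering voxels at several thresholds.
import numpy as np

def dice(pred, target):
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

def uncertainty_sweep(pred, target, uncertainty, thresholds=(0.25, 0.5, 0.75, 1.0)):
    correct = pred == target
    rows = []
    for t in thresholds:
        retained = uncertainty <= t                       # keep confident voxels only
        filtered_correct = np.logical_and(~retained, correct).sum() / max(correct.sum(), 1)
        rows.append((t, dice(pred[retained], target[retained]), filtered_correct))
    return rows

# Toy volumes standing in for a tumour sub-tissue segmentation and its uncertainty map.
rng = np.random.default_rng(0)
target = rng.random((16, 16, 16)) > 0.7
pred = np.logical_xor(target, rng.random(target.shape) > 0.95)  # ~5% voxel errors
uncertainty = rng.random(target.shape)
for t, d, f in uncertainty_sweep(pred, target, uncertainty):
    print(f"tau={t:.2f}  Dice(retained)={d:.3f}  filtered-correct ratio={f:.3f}")
```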
CNN Detection of New and Enlarging Multiple Sclerosis Lesions from Longitudinal Mri Using Subtraction Images
Nazanin Mohammadi Sepahvand
Douglas Arnold
Accurate detection and segmentation of new lesional activity in longitudinal Magnetic Resonance Images (MRIs) of patients with Multiple Sclerosis (MS) is important for monitoring disease activity, as well as for assessing treatment effects. In this work, we present the first deep learning framework to automatically detect and segment new and enlarging (NE) T2w lesions from longitudinal brain MRIs acquired from relapsing-remitting MS (RRMS) patients. The proposed framework is an adapted 3D U-Net [1] which includes as inputs the reference multi-modal MRI and T2-weighted lesion maps, as well as an attention mechanism based on the subtraction MRI (between the two timepoints), which serves to assist the network in learning to differentiate between real anatomical change and artifactual change, while constraining the search space for small lesions. Experiments on a large, proprietary, multi-center, multi-modal clinical trial dataset consisting of 1677 multi-modal scans illustrate that the network achieves high overall detection accuracy (detection AUC = 0.95), outperforming (1) a U-Net without an attention mechanism (detection AUC = 0.93), (2) a framework based on subtracting independent T2-weighted segmentations (detection AUC = 0.57), and (3) DeepMedic (detection AUC = 0.84) [2], particularly for small lesions. In addition, the method was able to accurately classify patients as active/inactive (sensitivity of 0.69 and specificity of 0.97).
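A toy sketch of how a subtraction image can act as an attention signal (the module below is a hypothetical stand-in, not the adapted 3D U-Net from the paper): the difference between the two timepoints is mapped to a sigmoid gate that modulates the segmentation features.

```python
# Sketch: gate segmentation features with an attention map derived from the
# subtraction of two longitudinal volumes.
import torch
import torch.nn as nn

class SubtractionGatedHead(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Conv3d(in_channels, 8, kernel_size=3, padding=1)
        self.attention = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.out = nn.Conv3d(8, 1, kernel_size=1)

    def forward(self, reference, followup, lesion_map):
        subtraction = followup - reference                  # change signal between timepoints
        x = torch.cat([reference, followup, lesion_map], dim=1)
        gate = torch.sigmoid(self.attention(subtraction))   # attention from the subtraction image
        h = torch.relu(self.features(x)) * gate             # suppress regions without change
        return torch.sigmoid(self.out(h))                   # voxelwise NE-lesion probability

# Toy usage: single-channel volumes of shape (batch, channel, D, H, W).
ref = torch.randn(1, 1, 16, 32, 32)
fup = ref + 0.1 * torch.randn_like(ref)
lesions = torch.zeros_like(ref)
print(SubtractionGatedHead(in_channels=3)(ref, fup, lesions).shape)
```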
Propagating Uncertainty Across Cascaded Medical Imaging Tasks for Improved Deep Learning Inference
Raghav Mehta
Thomas Christinck
Tanya Nair
Aurélie Bussy
Paul Lemaitre
Swapna Premasiri
Manuela Costantino
Mallar Chakravarty
Douglas Arnold
Yarin Gal
Although deep networks have been shown to perform very well on a variety of medical imaging tasks, inference in the presence of pathology presents several challenges to common models. These challenges impede the integration of deep learning models into real clinical workflows, where the customary process of cascading deterministic outputs from a sequence of image-based inference steps (e.g. registration, segmentation) generally leads to an accumulation of errors that impacts the accuracy of downstream inference tasks. In this paper, we propose that by embedding uncertainty estimates across cascaded inference tasks, performance on the downstream inference tasks should be improved. We demonstrate the effectiveness of the proposed approach in three different clinical contexts: (i) We demonstrate that by propagating T2-weighted lesion segmentation results and their associated uncertainties, subsequent T2 lesion detection performance is improved when evaluated on a proprietary large-scale, multi-site, clinical trial dataset acquired from patients with Multiple Sclerosis. (ii) We show an improvement in brain tumour segmentation performance when the uncertainty map associated with a synthesised missing MR volume is provided as an additional input to a follow-up brain tumour segmentation network, when evaluated on the publicly available BraTS-2018 dataset. (iii) We show that by propagating uncertainties from a voxel-level hippocampus segmentation task, the subsequent regression of the Alzheimer’s disease clinical score is improved.
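A minimal sketch of propagating uncertainty between two cascaded models, assuming Monte Carlo dropout as the upstream uncertainty estimator; both networks below are toy placeholders rather than the models used in the three clinical studies.

```python
# Sketch: pass an upstream prediction and its MC-dropout uncertainty map to a
# downstream network as extra input channels.
import torch
import torch.nn as nn

upstream = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Dropout3d(p=0.2), nn.Conv3d(8, 1, 1), nn.Sigmoid())
downstream = nn.Sequential(nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(),
                           nn.Conv3d(8, 1, 1), nn.Sigmoid())

def mc_dropout_predict(model, x, n_samples=10):
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.var(0)   # prediction and voxelwise uncertainty

image = torch.randn(1, 1, 16, 32, 32)
lesion_prob, lesion_uncertainty = mc_dropout_predict(upstream, image)
# Downstream inference sees the image, the upstream prediction, and its uncertainty.
downstream_input = torch.cat([image, lesion_prob, lesion_uncertainty], dim=1)
print(downstream(downstream_input).shape)
```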
Saliency Based Deep Neural Network for Automatic Detection of Gadolinium-Enhancing Multiple Sclerosis Lesions in Brain MRI
Joshua D. Durso-Finley
Douglas Arnold
Improving Pathological Structure Segmentation via Transfer Learning Across Diseases
Barleen Kaur
Paul Lemaitre
Raghav Mehta
Nazanin Mohammadi Sepahvand
Douglas Arnold
Prediction of Disease Progression in Multiple Sclerosis Patients using Deep Learning Analysis of MRI Data
Adrian Tousignant
Paul Lemaitre
Douglas Arnold
We present the first automatic end-to-end deep learning framework for the prediction of future patient disability progression (one year from baseline) based on multi-modal brain Magnetic Resonance Images (MRI) of patients with Multiple Sclerosis (MS). The model uses parallel convolutional pathways, an idea introduced by the popular Inception net (Szegedy et al., 2015), and is trained and tested on two large proprietary, multi-scanner, multi-center, clinical trial datasets of patients with Relapsing-Remitting Multiple Sclerosis (RRMS). Experiments on 465 patients on the placebo arms of the trials indicate that the model can accurately predict future disease progression, measured by a sustained increase in the extended disability status scale (EDSS) score over time. Using only the multi-modal MRI provided at baseline, the model achieves an AUC of 0.66 ± 0.055. However, when supplemental lesion label masks are provided as inputs as well, the AUC increases to 0.701 ± 0.027. Furthermore, we demonstrate that uncertainty estimates based on Monte Carlo dropout sample variance correlate with errors made by the model. Clinicians provided with the predictions computed by the model can therefore use the associated uncertainty estimates to assess which scans require further examination.
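A small sketch of the parallel-convolutional-pathway idea the abstract refers to (an Inception-style block whose branches have different receptive fields and are concatenated); the channel counts and 3D input shape below are arbitrary, not the paper's architecture.

```python
# Sketch: an Inception-style block with parallel conv pathways of different
# kernel sizes, concatenated along the channel dimension.
import torch
import torch.nn as nn

class ParallelPathwayBlock(nn.Module):
    def __init__(self, in_channels, out_per_path=8):
        super().__init__()
        self.path1 = nn.Conv3d(in_channels, out_per_path, kernel_size=1)
        self.path3 = nn.Conv3d(in_channels, out_per_path, kernel_size=3, padding=1)
        self.path5 = nn.Conv3d(in_channels, out_per_path, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.relu(torch.cat([self.path1(x), self.path3(x), self.path5(x)], dim=1))

# Multi-modal MRI (e.g. 4 modalities, optionally plus lesion masks) as input channels.
x = torch.randn(1, 4, 16, 32, 32)
print(ParallelPathwayBlock(in_channels=4)(x).shape)   # -> (1, 24, 16, 32, 32)
```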