
Brennan Nichyporuk

Research Scientist, Innovation, Development and Technologies

Publications

Information Gain Sampling for Active Learning in Medical Image Classification
Raghav Mehta
Changjian Shui
Brennan Nichyporuk
GP.2 Deep learning prediction of response to disease modifying therapy in primary progressive multiple sclerosis
JR Falet
Joshua D. Durso-Finley
Brennan Nichyporuk
Julien Schroeter
Francesca Bovis
M Sormani
D Precup
DL Arnold
Background: Only one disease modifying therapy (DMT), ocrelizumab, was found to slow disability progression in primary progressive multiple sclerosis (PPMS). Modeling the conditional average treatment effect (CATE) using deep learning could identify individuals more responsive to DMTs, allowing for predictive enrichment to increase the power of future clinical trials. Methods: Baseline clinical and MRI data were acquired as part of three placebo-controlled randomized clinical trials: ORATORIO (ocrelizumab), OLYMPUS (rituximab) and ARPEGGIO (laquinimod). Data from ORATORIO and OLYMPUS were separated into a training (70%) and testing (30%) set, while ARPEGGIO served as additional validation. An ensemble of multitask multilayer perceptrons was trained to predict the rate of disability progression on both treatment and placebo to estimate the CATE. Results: The model could separate individuals based on their predicted treatment effect. The top 25% of individuals predicted to be most responsive showed a larger effect size (HR 0.442, p=0.0497) than the group as a whole (HR 0.787, p=0.292). The model could also identify responders to laquinimod. A simulated study in which only the 50% most responsive individuals are randomized would require six times fewer participants to detect a significant effect. Conclusions: Individuals with PPMS who respond favourably to DMTs can be identified using deep learning based on their baseline clinical and imaging characteristics.
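The abstract does not include code; as a rough sketch of the approach it describes, the snippet below shows a multitask MLP with separate treatment and placebo heads, and an ensemble whose averaged head difference serves as a CATE estimate. The feature dimension, layer sizes, ensemble size, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a multitask MLP that predicts a
# progression rate under treatment and under placebo from baseline features,
# with an ensemble-averaged head difference as the CATE estimate.
# Feature dimension, hidden size, and ensemble size are illustrative choices.
import torch
import torch.nn as nn


class MultitaskProgressionMLP(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Two task heads: predicted progression rate on treatment and on placebo.
        self.head_treatment = nn.Linear(hidden, 1)
        self.head_placebo = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor):
        z = self.trunk(x)
        return self.head_treatment(z), self.head_placebo(z)


def ensemble_cate(models, x: torch.Tensor) -> torch.Tensor:
    """Average over the ensemble of (placebo - treatment) predicted rates,
    so larger values indicate a larger predicted benefit from treatment."""
    effects = []
    for m in models:
        rate_rx, rate_pbo = m(x)
        effects.append(rate_pbo - rate_rx)
    return torch.stack(effects).mean(dim=0)


# Usage with random stand-in data (real inputs would be baseline clinical + MRI features).
models = [MultitaskProgressionMLP(n_features=32) for _ in range(5)]
x = torch.randn(8, 32)
print(ensemble_cate(models, x).shape)  # torch.Size([8, 1])
```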
Metrics Reloaded - A new recommendation framework for biomedical image analysis validation
Annika Reinke
Lena Maier-Hein
Evangelia Christodoulou
Ben Glocker
Patrick Scholz
Fabian Isensee
Jens Kleesiek
Michal Kozubek
Mauricio Reyes
Michael Alexander Riegler
Manuel Wiesenfarth
Michael Baumgartner
Matthias Eisenmann
Doreen Heckmann-Nötzel
Ali Emre Kavur
Tim Rädsch
Minu D. Tizabi
Laura Acion
Michela Antonelli
Spyridon Bakas
Peter Bankhead
Arriel Benis
M. Jorge Cardoso
Veronika Cheplygina
Beth A Cimini
Gary S. Collins
Keyvan Farahani
Bram van Ginneken
Fred A Hamprecht
Daniel A. Hashimoto
Michael M. Hoffman
Merel Huisman
Pierre Jannin
Charles Kahn
Alexandros Karargyris
Alan Karthikesalingam
Hannes Kenngott
Annette Kopp-Schneider
Anna Kreshuk
Tahsin Kurc
Bennett Landman
Geert Litjens
Amin Madani
Klaus Maier-Hein
Anne Martel
Peter Mattson
Erik Meijering
Bjoern Menze
David Moher
Karel G.M. Moons
Henning Müller
Brennan Nichyporuk
Felix Nickel
Jens Petersen
Nasir Rajpoot
Nicola Rieke
Julio Saez-Rodriguez
Clara I. Sánchez
Shravya Shetty
Maarten van Smeden
Carole H. Sudre
Ronald M. Summers
Abdel A. Taha
Sotirios A. Tsaftaris
Ben Van Calster
Gael Varoquaux
Paul F Jaeger
Meaningful performance assessment of biomedical image analysis algorithms depends on objective and appropriate performance metrics, but there are major shortcomings in the current state of the art. So far, limited attention has been paid to the practical pitfalls associated with using particular metrics for image analysis tasks. Therefore, a number of international initiatives have collaborated to offer researchers guidance and tools for selecting performance metrics in a problem-aware manner. In our proposed framework, the characteristics of the given biomedical problem are first captured in a problem fingerprint, which identifies properties related to domain interests, the target structure(s), the input datasets, and the algorithm output. In a second step, a problem category-specific mapping is applied to match fingerprints to metrics that reflect domain requirements. Based on input from experts at more than 60 institutions worldwide, we believe our metric recommendation framework will be useful to the MIDL community and will enhance the quality of biomedical image analysis algorithm validation.
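As a toy illustration of the fingerprint-to-metric idea described above (not the official Metrics Reloaded tooling), the sketch below captures a few problem properties in a small fingerprint type and maps them to candidate metrics. The fingerprint fields and the mapping rules are simplified assumptions; the actual framework's decision logic is far richer.

```python
# Toy illustration of a "problem fingerprint" mapped to candidate metrics.
# Properties and rules are simplified assumptions, not the framework itself.
from dataclasses import dataclass


@dataclass
class ProblemFingerprint:
    category: str               # e.g. "semantic_segmentation", "image_classification"
    small_structures: bool      # are target structures small relative to the image?
    high_class_imbalance: bool  # is the class/label distribution strongly skewed?


def recommend_metrics(fp: ProblemFingerprint) -> list[str]:
    if fp.category == "semantic_segmentation":
        metrics = ["Dice similarity coefficient"]
        if fp.small_structures:
            # Boundary-aware metrics are less dominated by large-object overlap.
            metrics.append("Normalized surface distance")
        return metrics
    if fp.category == "image_classification":
        return (["Balanced accuracy", "AUROC"] if fp.high_class_imbalance
                else ["Accuracy", "AUROC"])
    return ["(consult the full framework for this problem category)"]


print(recommend_metrics(ProblemFingerprint("semantic_segmentation", True, False)))
```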
Deep Learning Prediction of Response to Disease Modifying Therapy in Primary Progressive Multiple Sclerosis (P1-1.Virtual)
Jean-Pierre R. Falet
Joshua D. Durso-Finley
Brennan Nichyporuk
Julien Schroeter
Francesca Bovis
Maria-Pia Sormani
Douglas Arnold
Personalized Prediction of Future Lesion Activity and Treatment Effect in Multiple Sclerosis from Baseline MRI
Joshua D. Durso-Finley
Jean-Pierre R. Falet
Brennan Nichyporuk
Douglas Arnold
Precision medicine for chronic diseases such as multiple sclerosis (MS) involves choosing a treatment which best balances efficacy and side effects/preferences for individual patients. Making this choice as early as possible is important, as delays in finding an effective therapy can lead to irreversible disability accrual. To this end, we present the first deep neural network model for individualized treatment decisions from baseline magnetic resonance imaging (MRI) (with clinical information if available) for MS patients which (a) predicts future new and enlarging T2-weighted (NE-T2) lesion counts on follow-up MRI under multiple treatments and (b) estimates the conditional average treatment effect (CATE), defined as the predicted future suppression of NE-T2 lesions, between different treatment options relative to placebo. Our model is validated on a proprietary federated dataset of 1817 multi-sequence MRIs acquired from MS patients during four multi-centre randomized clinical trials. Our framework achieves high average precision in the binarized regression of future NE-T2 lesions on five different treatments, identifies heterogeneous treatment effects, and provides a personalized treatment recommendation that accounts for treatment-associated risk (side effects, patient preference, administration difficulties).
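For intuition about the recommendation step, the toy sketch below ranks treatments from hypothetical per-treatment lesion-count predictions for one patient, computing each treatment's effect relative to placebo and subtracting a risk penalty. All treatment names, counts, risks, and the trade-off weight are invented for illustration and are not taken from the paper.

```python
# Toy sketch: pick a treatment by trading off predicted NE-T2 lesion suppression
# (relative to placebo) against a treatment-associated "risk" penalty.
# Treatment names, counts, risks, and the trade-off weight are made up.
predicted_counts = {"placebo": 6.0, "drug_A": 1.5, "drug_B": 2.5, "drug_C": 4.0}
risk_penalty = {"placebo": 0.0, "drug_A": 2.0, "drug_B": 0.5, "drug_C": 0.2}
risk_weight = 1.0  # how strongly side effects / preferences count against efficacy

def treatment_effect(treatment: str) -> float:
    """Predicted suppression of NE-T2 lesions relative to placebo (higher is better)."""
    return predicted_counts["placebo"] - predicted_counts[treatment]

scores = {t: treatment_effect(t) - risk_weight * risk_penalty[t]
          for t in predicted_counts if t != "placebo"}
recommendation = max(scores, key=scores.get)
print(scores, "->", recommendation)  # drug_B wins once risk is accounted for
```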
Estimating treatment effect for individuals with progressive multiple sclerosis using deep learning
JR Falet
Joshua D. Durso-Finley
Brennan Nichyporuk
Julien Schroeter
Francesca Bovis
Maria-Pia Sormani
Douglas Arnold
Cohort Bias Adaptation in Aggregated Datasets for Lesion Segmentation
Brennan Nichyporuk
Jillian L. Cardinell
Justin Szeto
Raghav Mehta
Sotirios A. Tsaftaris
Douglas Arnold
HAD-Net: A Hierarchical Adversarial Knowledge Distillation Network for Improved Enhanced Tumour Segmentation Without Post-Contrast Images
Saverio Vadacchino
Raghav Mehta
Nazanin Mohammadi Sepahvand
Brennan Nichyporuk
James J. Clark
Segmentation of enhancing tumours or lesions from MRI is important for detecting new disease activity in many clinical contexts. However, accurate segmentation requires the inclusion of medical images (e.g., T1 post-contrast MRI) acquired after injecting patients with a contrast agent (e.g., gadolinium), a process no longer thought to be safe. Although a number of modality-agnostic segmentation networks have been developed over the past few years, they have met with limited success in the context of enhancing pathology segmentation. In this work, we present HAD-Net, a novel offline adversarial knowledge distillation (KD) technique, whereby a pre-trained teacher segmentation network, with access to all MRI sequences, teaches a student network, via hierarchical adversarial training, to better overcome the large domain shift presented when crucial images are absent during inference. In particular, we apply HAD-Net to the challenging task of enhancing tumour segmentation when access to post-contrast imaging is not available. The proposed network is trained and tested on the BraTS 2019 brain tumour segmentation challenge dataset, where it achieves performance improvements in the range of 16%-26% over (a) recent modality-agnostic segmentation methods (U-HeMIS, U-HVED), (b) KD-Net adapted to this problem, (c) the pre-trained student network, and (d) a non-hierarchical version of the network (AD-Net), in terms of Dice scores for enhancing tumour (ET). The network also shows improvements in tumour core (TC) Dice scores. Finally, the network outperforms both the baseline student network and AD-Net in terms of uncertainty quantification for enhancing tumour segmentation based on the BraTS 2019 uncertainty challenge metrics. Our code is publicly available at: https://github.com/SaverioVad/HAD_Net
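The repository linked above contains the actual implementation; the snippet below is only a simplified, non-hierarchical sketch of the adversarial distillation idea: a frozen teacher with access to all sequences (including post-contrast), a student restricted to pre-contrast inputs, and a discriminator the student learns to fool alongside an ordinary segmentation loss. Network shapes, channel counts, and loss weights are assumptions; HAD-Net itself uses U-Net-style networks and multi-scale discrimination.

```python
# Simplified, non-hierarchical sketch of adversarial knowledge distillation
# with a privileged teacher and a pre-contrast-only student (illustrative only).
import torch
import torch.nn as nn

conv = lambda c_in, c_out: nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

teacher = nn.Sequential(conv(4, 16), nn.Conv2d(16, 1, 1))   # 4 sequences incl. post-contrast
student = nn.Sequential(conv(3, 16), nn.Conv2d(16, 1, 1))   # 3 pre-contrast sequences only
discriminator = nn.Sequential(conv(1, 8), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

bce = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

x_all = torch.randn(2, 4, 64, 64)                   # stand-in batch: all sequences
x_pre = x_all[:, :3]                                # pre-contrast subset fed to the student
y = (torch.rand(2, 1, 64, 64) > 0.9).float()        # stand-in enhancing-tumour mask

with torch.no_grad():
    t_out = teacher(x_all)                          # teacher is pre-trained and frozen

# 1) Discriminator step: real = teacher output, fake = student output.
s_out = student(x_pre).detach()
d_loss = bce(discriminator(t_out), torch.ones(2, 1)) + \
         bce(discriminator(s_out), torch.zeros(2, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Student step: segmentation loss + adversarial loss (fool the discriminator).
s_out = student(x_pre)
g_loss = bce(s_out, y) + 0.1 * bce(discriminator(s_out), torch.ones(2, 1))
opt_s.zero_grad(); g_loss.backward(); opt_s.step()
```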
Optimizing Operating Points for High Performance Lesion Detection and Segmentation Using Lesion Size Reweighting
Brennan Nichyporuk
Justin Szeto
Douglas Arnold
There are many clinical contexts which require accurate detection and segmentation of all focal pathologies (e.g., lesions, tumours) in patient images. In cases where there is a mix of small and large lesions, standard binary cross-entropy loss results in better segmentation of large lesions at the expense of missing small ones. Adjusting the operating point to accurately detect all lesions generally leads to oversegmentation of large lesions. In this work, we propose a novel reweighting strategy to eliminate this performance gap, increasing small pathology detection performance while maintaining segmentation accuracy. We show that our reweighting strategy vastly outperforms competing strategies in experiments on a large-scale, multi-scanner, multi-center dataset of multiple sclerosis patient images.
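As a rough illustration of lesion-size reweighting (not the paper's exact scheme), the sketch below weights a voxel-wise binary cross-entropy so that every lesion contributes the same total weight to the loss regardless of its size. The specific weighting rule, the crude handling of background voxels, and the toy data are assumptions for illustration only.

```python
# Illustrative sketch: each lesion voxel is weighted inversely to the size of
# the connected component it belongs to, so every lesion contributes a total
# weight of 1 regardless of size. Background voxels keep weight 1 each (a crude
# choice made only to keep the sketch short).
import numpy as np
from scipy import ndimage

def size_reweighted_bce(pred_prob: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    weights = np.ones_like(target, dtype=float)
    labeled, n_lesions = ndimage.label(target)        # connected components = lesions
    for lesion_id in range(1, n_lesions + 1):
        mask = labeled == lesion_id
        weights[mask] = 1.0 / mask.sum()              # small lesions get larger per-voxel weight
    pred_prob = np.clip(pred_prob, eps, 1 - eps)
    bce = -(target * np.log(pred_prob) + (1 - target) * np.log(1 - pred_prob))
    return float((weights * bce).sum() / weights.sum())

# Stand-in example: one large and one small lesion in a 2D "image".
target = np.zeros((32, 32)); target[2:12, 2:12] = 1; target[20:22, 20:22] = 1
pred = np.full((32, 32), 0.1); pred[2:12, 2:12] = 0.9   # large lesion found, small one missed
print(size_reweighted_bce(pred, target))
```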
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier
Pierre Delaunay
Mirko Bronzi
Assya Trofimov
Brennan Nichyporuk
Justin Szeto
Naz Sepah
Edward Raff
Kanika Madan
Vikram Voleti
Vincent Michalski
Dmitriy Serdyuk
Gael Varoquaux
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in the light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator brings it closer to the ideal estimator at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
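As a toy illustration of why accounting for several variance sources matters (not the paper's analysis), the sketch below re-evaluates two synthetic pipelines across random seeds and data splits and compares the mean difference against the run-to-run spread; the score function and its noise levels are invented.

```python
# Toy illustration: compare two pipelines across seeds AND data splits, then
# judge the mean improvement against the spread instead of trusting one run.
import numpy as np

def evaluate(pipeline_shift: float, seed: int, split: int) -> float:
    """Stand-in for 'train with this seed on this split and return test accuracy'."""
    r = np.random.default_rng(hash((seed, split, pipeline_shift)) % (2**32))
    return 0.80 + pipeline_shift + 0.01 * r.standard_normal() + 0.015 * np.sin(split)

def scores(pipeline_shift: float, n_seeds: int = 5, n_splits: int = 5) -> np.ndarray:
    return np.array([evaluate(pipeline_shift, s, k)
                     for s in range(n_seeds) for k in range(n_splits)])

a, b = scores(0.000), scores(0.005)          # pipeline B is 0.5 points better on average
diff = b - a
print(f"mean diff = {diff.mean():.4f} +/- {diff.std(ddof=1):.4f} over {diff.size} runs")
# A single run could easily point the wrong way; averaging over seeds and splits
# lets the small true improvement be judged against the observed variance.
```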