
Eugene Vorontsov

Alumni

Publications

The Liver Tumor Segmentation Benchmark (LiTS)
Patrick Bilic
Patrick Christ
Hongwei Bran Li
Grzegorz Chlebus
Hao Chen
Qi Dou
Chi-Wing Fu
Xu Han
Gabriel Efrain Humpire Mamani
Pheng Ann Heng
Jürgen Hesser
Samuel Kadoury
Julian Walter Holch
Tomasz Konopczynski
Miao Yue
Chunming Li
X. Li
Jana Lipková
John Lowengrub
Michal Marianne Amitai
Hans Meine
J. Moltz
Marie Piraud
Ivan Ezhov
Xiaojuan Qi
Fernando Navarro
Jin Qi
Florian Kofler
Markus Rempfler
Johannes C. Paetzold
Suprosanna Shit
Andrea Schenk
Xiaobin Hu
Anjany Sekuboyina
Ping Zhou
Christian Hülsemeyer
Marcel Beetz
Jan Kirschke
Florian Ettlinger
Felix Gruen
Benedikt Wiestler
Zhiheng Zhang
Georgios Kaissis
Fabian Lohöfer
Rickmer Braren
J. Holch
Michela Antonelli
Felix Hofmann
Woong Bae
Wieland Sommer
Míriam Bellver
Volker Heinemann
Lei Bi
Colin Jacobs
G. Mamani
Bram van Ginneken
Erik B. Dam
Gabriel Chartrand
An Tang
Bogdan Georgescu
Avi Ben-Cohen
Xavier Giró-i-Nieto
Eyal Klang
M. Amitai
E. Konen
Hayit Greenspan
Johan Moreau
Jan Hendrik Moltz
Alexandre Hostettler
Christian Igel
Luc Soler
Fabian Isensee
Refael Vivanti
Paul Jäger
Adi Szeskin
Fucang Jia
Naama Lev-Cohain
Krishna Chaitanya Kaluva
Jacob Sosna
Mahendra Khened
Leo Joskowicz
Ildoo Kim
Bjoern Menze
Jae-Hun Kim
Zengming Shen
Sungwoong Kim
Simon Kohl
Avinash Kori
Ganapathy Krishnamurthi
Fan Li
Hongchao Li
Junbo Li
Xiaomeng Li
Jun Ma
Klaus Maier-Hein
Kevis-Kokitsi Maninis
Dorit Merhof
Akshay Pai
Mathias Perslev
Jens Petersen
Jordi Pont-Tuset
Oliver Rippel
Ignacio Sarasua
Jordi Torres
Christian Wachinger
Chunliang Wang
Leon Weninger
Jianrong Wu
Daguang Xu
Xiaoping Yang
Simon Chun-Ho Yu
Yading Yuan
Liping Zhang
Jorge Cardoso
Spyridon Bakas
Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies
Modelling long-term dependencies is a challenge for recurrent neural networks. This is primarily due to the fact that gradients vanish during training, as the sequence length increases. Gradients can be attenuated by transition operators and are attenuated or dropped by activation functions. Canonical architectures like LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture (Non-saturating Recurrent Unit; NRU) that relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real world tasks, we demonstrate that the proposed model is the only model that performs among the top 2 models across all tasks with and without long-term dependencies, when compared against a range of other architectures.
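The sketch below illustrates the core idea described in the abstract: a recurrent cell whose memory is updated additively through non-saturating (ReLU) transformations rather than sigmoid/tanh gates, so gradients through the memory path are not squashed by saturating activations. It is a minimal, illustrative PyTorch example, not the authors' exact NRU architecture; the class name `NonSaturatingCell`, the layer sizes, and the simple additive write scheme are assumptions made only for demonstration.

```python
# Minimal sketch of a non-saturating recurrent cell (NOT the exact NRU from the paper).
# The cell keeps an external memory vector m that is updated additively with ReLU,
# avoiding saturating gates whose derivatives shrink gradients over long sequences.
import torch
import torch.nn as nn


class NonSaturatingCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, memory_size: int):
        super().__init__()
        # Hidden-state update sees the input, previous hidden state, and memory read-out.
        self.hidden_proj = nn.Linear(input_size + hidden_size + memory_size, hidden_size)
        # Write projection for the additive memory update; ReLU keeps it non-saturating.
        self.write_proj = nn.Linear(hidden_size, memory_size)

    def forward(self, x, state):
        h, m = state
        h_new = torch.relu(self.hidden_proj(torch.cat([x, h, m], dim=-1)))
        # Additive memory update: no sigmoid/tanh forget gate in the gradient path.
        m_new = m + torch.relu(self.write_proj(h_new))
        return h_new, (h_new, m_new)


if __name__ == "__main__":
    cell = NonSaturatingCell(input_size=8, hidden_size=16, memory_size=32)
    h, m = torch.zeros(1, 16), torch.zeros(1, 32)
    for _ in range(5):  # unroll over a toy sequence
        x_t = torch.randn(1, 8)
        out, (h, m) = cell(x_t, (h, m))
    print(out.shape)  # torch.Size([1, 16])
```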
Deep Learning for Automated Segmentation of Liver Lesions at CT in Patients with Colorectal Cancer Liver Metastases.
Milena Cerny
Philippe Régnier
Lisa Di Jorio
Réal Lapointe
Franck Vandenbroucke-Menu
Simon Turcotte
Samuel Kadoury
An Tang
Purpose To evaluate the performance, agreement, and efficiency of a fully convolutional network (FCN) for liver lesion detection and segmentation at CT examinations in patients with colorectal liver metastases (CLMs). Materials and Methods This retrospective study evaluated an automated method using an FCN that was trained, validated, and tested with 115, 15, and 26 contrast material-enhanced CT examinations containing 261, 22, and 105 lesions, respectively. Manual detection and segmentation by a radiologist was the reference standard. Performance of fully automated and user-corrected segmentations was compared with that of manual segmentations. The interuser agreement and interaction time of manual and user-corrected segmentations were assessed. Analyses included sensitivity and positive predictive value of detection, segmentation accuracy, Cohen κ, Bland-Altman analyses, and analysis of variance. Results In the test cohort, for lesion size smaller than 10 mm (n = 30), 10-20 mm (n = 35), and larger than 20 mm (n = 40), the detection sensitivity of the automated method was 10%, 71%, and 85%; positive predictive value was 25%, 83%, and 94%; Dice similarity coefficient was 0.14, 0.53, and 0.68; maximum symmetric surface distance was 5.2, 6.0, and 10.4 mm; and average symmetric surface distance was 2.7, 1.7, and 2.8 mm, respectively. For manual and user-corrected segmentation, κ values were 0.42 (95% confidence interval: 0.24, 0.63) and 0.52 (95% confidence interval: 0.36, 0.72); normalized interreader agreement for lesion volume was -0.10 ± 0.07 (95% confidence interval) and -0.10 ± 0.08; and mean interaction time was 7.7 minutes ± 2.4 (standard deviation) and 4.8 minutes ± 2.1 (P < .001), respectively. Conclusion Automated detection and segmentation of CLM by using deep learning with convolutional neural networks, when manually corrected, improved efficiency but did not substantially change agreement on volumetric measurements. © RSNA, 2019. Supplemental material is available for this article.
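The abstract reports segmentation accuracy with the Dice similarity coefficient. As a small aside, the sketch below shows how that metric is computed on binary lesion masks; it is an illustrative NumPy example under stated assumptions (the function name `dice_coefficient` and the smoothing term `eps` are mine), not the paper's evaluation pipeline.

```python
# Dice similarity coefficient between two binary masks:
# Dice = 2 * |A ∩ B| / (|A| + |B|), where A is the prediction and B the reference.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Return the Dice overlap of two binary masks; eps avoids division by zero."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))


if __name__ == "__main__":
    # Toy 2D mask standing in for a lesion on a CT slice: a perfect match gives Dice ~ 1.0.
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[20:40, 20:40] = 1
    print(dice_coefficient(mask, mask))
```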