Brain tumor segmentation with Deep Neural Networks
Mohammad Havaei
Axel Davy
David Warde-Farley
Antoine Biard
Pierre-Marc Jodoin
Calibrating Energy-based Generative Adversarial Networks
Zihang Dai
Amjad Almahairi
Philip Bachman
Eduard Hovy
In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we propose a flexible adversarial training framework, and prove that this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum. We derive the analytic form of the induced solution and analyze its properties. To make the proposed framework trainable in practice, we introduce two effective approximation techniques. Empirically, the experimental results closely match our theoretical analysis, verifying that the discriminator is able to recover the energy of the data distribution.
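The calibrated discriminator described above outputs an energy E(x) whose Boltzmann form exp(-E(x)) carries density information. A minimal sketch of that final step, with made-up energies on a toy finite sample set (the `energy_to_density` helper and all values are illustrative, not the paper's implementation):

```python
import numpy as np

def energy_to_density(energies):
    """Convert discriminator energies E(x) into a normalized density
    p(x) proportional to exp(-E(x)) over a finite set of points - a toy
    stand-in for the density information the discriminator retains."""
    z = np.exp(-(energies - energies.min()))  # shift for numerical stability
    return z / z.sum()

E = np.array([0.5, 1.5, 3.0])  # hypothetical energies at three sample points
p = energy_to_density(E)       # lower energy -> higher probability
```

Lower-energy samples receive higher probability, which is the sense in which the discriminator "recovers the energy of the data distribution".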
Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures
M. Jorge Cardoso
Xiongbiao Luo
Stefan Wesarg
Tobias Reichl
M. Ballester
Jonathan McLeod
Klaus Drechsler
T. Peters
Marius Erdt
Kensaku Mori
M. Linguraru
Andreas Uhl
Cristina Oyarzun Laura
R. Shekhar
Computer-Assisted Conceptual Analysis of Textual Data as Applied to Philosophical Corpuses
Jean Guy Meunier
L. Chartrand
Mathieu Valette
Marie-noëlle Bayle
Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support
M. Jorge Cardoso
G. Carneiro
T. Syeda-Mahmood
J. Tavares
Mehdi Moradi
Andrew P. Bradley
Hayit Greenspan
J. Papa
Anant Madabhushi
Jacinto C Nascimento
Jaime S. Cardoso
Vasileios Belagiannis
Zhi Lu
Diet Networks: Thin Parameters for Fat Genomics
Pierre Luc Carrier
Akram Erraqabi
Tristan Sylvain
Alex Auvolat
Etienne Dejoie
Marie-Pierre Dubé
Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity, not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem.
We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.
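The parametrization described in this abstract can be sketched in a few lines: instead of learning a first-layer weight matrix of size (features × hidden units), a parameter prediction network maps each feature's embedding to that feature's weight vector. All sizes and the linear form of the prediction network below are hypothetical, chosen only to make the parameter-count reduction concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, emb_dim = 1000, 32, 16  # hypothetical sizes

# Distributed representation for each input feature (e.g. one embedding
# per genomic position); in the paper these can be learned or precomputed.
feature_emb = rng.normal(size=(n_features, emb_dim))

# Parameter prediction network (here a single linear map for brevity):
# free parameters scale with emb_dim * n_hidden instead of
# n_features * n_hidden.
W_pred = rng.normal(size=(emb_dim, n_hidden)) * 0.1

def first_layer_weights(emb, w):
    # Predict the classifier's first-layer weights: one n_hidden-dim
    # weight vector per input feature.
    return emb @ w  # shape: (n_features, n_hidden)

W = first_layer_weights(feature_emb, W_pred)
x = rng.integers(0, 3, size=n_features)  # ternary SNP input in {0, 1, 2}
h = np.tanh(x @ W)                       # hidden activations of the classifier
```

With these toy sizes the predicted first layer has 1000 × 32 entries but only 16 × 32 free parameters, which is the reduction the abstract describes.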
Facilitating Multimodality in Normalizing Flows
The true Bayesian posterior of a model such as a neural network may be highly multimodal. In principle, normalizing flows can represent such a distribution via compositions of invertible transformations of random noise. In practice, however, existing normalizing flows may fail to capture most of the modes of a distribution. We argue that the conditionally affine structure of the transformations used in [Dinh et al., 2014, 2016, Kingma et al., 2016] is inefficient, and show that flows which instead use (conditional) invertible non-linear transformations naturally enable multimodality in their output distributions. With just two layers of our proposed deep sigmoidal flow, we are able to model complicated 2d energy functions with much higher fidelity than six layers of deep affine flows.
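The non-linear transformation mentioned above can be illustrated with a single sigmoidal-flow unit: a convex combination of sigmoids followed by a logit, which is strictly monotonic and hence invertible. This is a minimal sketch with hand-picked parameters, not the paper's trained model; the specific values of `a`, `b`, and `w` are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p) - np.log1p(-p)

def dsf_layer(x, a, b, w):
    """One sigmoidal-flow unit: logit of a convex combination of
    sigmoids. With a > 0 and w a probability vector the map is strictly
    increasing, hence invertible - unlike an affine coupling, it is
    genuinely non-linear in x."""
    return logit(np.dot(w, sigmoid(a * x + b)))

# hypothetical parameters for a 4-component unit
a = np.array([1.0, 2.0, 0.5, 3.0])      # positive slopes
b = np.array([-1.0, 0.0, 1.0, 2.0])     # biases
w = np.array([0.25, 0.25, 0.25, 0.25])  # convex weights

# stacking two such layers mirrors the two-layer flow described above
y = dsf_layer(dsf_layer(0.3, a, b, w), a, b, w)
```

Because each unit is monotonic, the composition stays invertible, while the non-linearity lets the flow warp density mass more flexibly than affine layers.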
Fetal, Infant and Ophthalmic Medical Image Analysis
M. Jorge Cardoso
Andrew Melbourne
Hrvoje Bogunovic
Pim Moeskops
Xinjian Chen
Ernst Schwartz
M. Garvin
E. Robinson
E. Trucco
Michael Ebner
Yanwu Xu
Antonios Makropoulos
Adrien Desjardin
Tom Kamiel Magda Vercauteren
GibbsNet: Iterative Adversarial Inference for Deep Graphical Models
Alex Lamb
Yaroslav Ganin
Joseph Paul Cohen
Directed latent variable models that formulate the joint distribution as …