
Adriana Romero Soriano

Core Industry Member
Canada CIFAR AI Chair
Adjunct Professor, McGill University, School of Computer Science
Research Scientist, Meta AI Research (FAIR)
Research Topics
Deep Learning
Generative Models
Computer Vision

Biography

Adriana Romero-Soriano is a research scientist at Meta (FAIR, Fundamental AI Research), an adjunct professor at McGill University, a Core Industry Member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI Chair holder. Her research sits at the intersection of generative models, computer vision and responsible AI. Her most recent work focuses on improving the quality, controllability, consistency and representation diversity of visual content creation systems. She received her Ph.D. from the University of Barcelona, where she worked with Carlo Gatta, and spent two years as a postdoctoral researcher at Mila, where she worked with Professor Yoshua Bengio.

Current Students

Research Collaborator - UdeM
PhD - McGill
Principal supervisor:
PhD - McGill
Principal supervisor:

Publications

Automated segmentation of cortical layers in BigBrain reveals divergent cortical and laminar thickness gradients in sensory and motor cortices.
Konrad Wagstyl
Stéphanie Larocque
Guillem Cucurull
Claude Lepage
Joseph Paul Cohen
Sebastian Bludau
Nicola Palomero-Gallagher
L. Lewis
Thomas Funck
Hannah Spitzer
Timo Dickscheid
Paul C Fletcher
Karl Zilles
Katrin Amunts
Alan C. Evans
Large-scale in vivo neuroimaging datasets offer new possibilities for reliable, well-powered measures of interregional structural differences and biomarkers of pathological changes in a wide variety of neurological and psychiatric diseases. However, so far studies have been structurally and functionally imprecise, being unable to relate pathological changes to specific cortical layers or neurobiological processes. We developed artificial neural networks to segment cortical and laminar surfaces in the BigBrain, a 3D histological model of the human brain. We sought to test whether previously-reported thickness gradients, as measured by MRI, in sensory and motor processing cortices, were present in a histological atlas of cortical thickness, and which cortical layers were contributing to these gradients. Identifying common gradients of cortical organisation enables us to meaningfully relate microstructural, macrostructural and functional cortical parameters. Analysis of thickness gradients across sensory cortices, using our fully segmented six-layered model, was consistent with MRI findings, showing increasing thickness moving up the processing hierarchy. In contrast, fronto-motor cortices showed the opposite pattern with changes in thickness of layers III, V and VI being the primary drivers of these gradients. As well as identifying key differences between sensory and motor gradients, our findings show how the use of this laminar atlas offers insights that will be key to linking single-neuron morphological changes, mesoscale cortical layers and macroscale cortical thickness.
BigBrain 3D atlas of cortical layers: Cortical and laminar thickness gradients diverge in sensory and motor cortices
Konrad Wagstyl
Stéphanie Larocque
Guillem Cucurull
Claude Lepage
Joseph Paul Cohen
Sebastian Bludau
Nicola Palomero-Gallagher
L. Lewis
Thomas Funck
Hannah Spitzer
Timo Dickscheid
Paul C Fletcher
Karl Zilles
Katrin Amunts
Alan C. Evans
Histological atlases of the cerebral cortex, such as those made famous by Brodmann and von Economo, are invaluable for understanding human brain microstructure and its relationship with functional organization in the brain. However, these existing atlases are limited to small numbers of manually annotated samples from a single cerebral hemisphere, measured from 2D histological sections. We present the first whole-brain quantitative 3D laminar atlas of the human cerebral cortex. This atlas was derived from a 3D histological model of the human brain at 20 micron isotropic resolution (BigBrain), using a convolutional neural network to segment, automatically, the cortical layers in both hemispheres. Our approach overcomes many of the historical challenges with measurement of histological thickness in 2D and the resultant laminar atlas provides an unprecedented level of precision and detail. We utilized this BigBrain cortical atlas to test whether previously reported thickness gradients, as measured by MRI in sensory and motor processing cortices, were present in a histological atlas of cortical thickness, and which cortical layers were contributing to these gradients. Cortical thickness increased across sensory processing hierarchies, primarily driven by layers III, V and VI. In contrast, fronto-motor cortices showed the opposite pattern, with decreases in total and pyramidal layer thickness. These findings illustrate how this laminar atlas will provide a link between single-neuron morphology, mesoscale cortical layering, macroscopic cortical thickness and, ultimately, functional neuroanatomy.
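The two BigBrain entries above describe segmenting the six cortical layers automatically with neural networks. As a rough, hypothetical illustration of that kind of approach (not the authors' code), the PyTorch sketch below labels each sample of a 1D cortical intensity profile with a layer class; the profile length, channel sizes and class count are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): a 1D CNN that labels each point of a
# cortical staining-intensity profile with one of 6 layers + background.
# Profile length (200 samples) and channel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LayerSegmenter1D(nn.Module):
    def __init__(self, n_classes: int = 7, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, n_classes, kernel_size=1),  # per-sample layer logits
        )

    def forward(self, profiles: torch.Tensor) -> torch.Tensor:
        # profiles: (batch, 1, profile_length) intensity profiles sampled
        # between the pial and white-matter surfaces
        return self.net(profiles)  # (batch, n_classes, profile_length)

model = LayerSegmenter1D()
dummy = torch.randn(8, 1, 200)                  # 8 synthetic profiles, 200 samples each
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 7, (8, 200)))
```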
On the Iterative Refinement of Densely Connected Representation Levels for Semantic Segmentation
Arantxa Casanova
Guillem Cucurull
Michal Drozdzal
State-of-the-art semantic segmentation approaches increase the receptive field of their models by using either a downsampling path composed of poolings/strided convolutions or successive dilated convolutions. However, it is not clear which operation leads to the best results. In this paper, we systematically study the differences introduced by distinct receptive field enlargement methods and their impact on the performance of a novel architecture, called Fully Convolutional DenseResNet (FC-DRN). FC-DRN has a densely connected backbone composed of residual networks. Following standard image segmentation architectures, receptive field enlargement operations that change the representation level are interleaved among residual networks. This allows the model to exploit the benefits of both residual and dense connectivity patterns, namely: gradient flow, iterative refinement of representations, multi-scale feature combination and deep supervision. In order to highlight the potential of our model, we test it on the challenging CamVid urban scene understanding benchmark and make the following observations: 1) downsampling operations outperform dilations when the model is trained from scratch, 2) dilations are useful during the finetuning step of the model, 3) coarser representations require fewer refinement steps, and 4) ResNets (by model construction) are good regularizers, since they can reduce the model capacity when needed. Finally, we compare our architecture to alternative methods and report state-of-the-art results on the CamVid dataset, with at least half as many parameters.
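To make the downsampling-versus-dilation comparison in this abstract concrete, here is a minimal, hypothetical PyTorch sketch (not the FC-DRN implementation, and omitting its dense connectivity and deep supervision) in which residual blocks are interleaved with either pooling or growing dilation as the receptive-field enlargement operation; all layer sizes are illustrative.

```python
# Minimal sketch (not FC-DRN): residual blocks with a configurable
# receptive-field enlargement operation between them, contrasting the two
# options studied in the paper (downsampling vs. dilation).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch: int, dilation: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual connection

def make_backbone(ch: int = 32, mode: str = "pool"):
    """Interleave residual blocks with either pooling or growing dilation."""
    blocks = []
    for level in range(3):
        if mode == "pool":
            blocks += [ResBlock(ch), nn.MaxPool2d(2)]       # enlarge RF by downsampling
        else:
            blocks += [ResBlock(ch, dilation=2 ** level)]   # enlarge RF by dilation
    return nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), *blocks)

x = torch.randn(1, 3, 64, 64)
print(make_backbone(mode="pool")(x).shape)     # resolution shrinks with pooling
print(make_backbone(mode="dilate")(x).shape)   # resolution preserved with dilation
```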
BigBrain: 1D convolutional neural networks for automated segmentation of cortical layers
Konrad Wagstyl
Claude Lepage
Karl Zilles
Sebastian Bludau
Guillem Cucurull
Alan C. Evans
Paul C Fletcher
Joseph Paul Cohen
Stéphanie Larocque
Thomas Funck
Katrin Amunts
Convolutional neural networks for mesh-based parcellation of the cerebral cortex
Guillem Cucurull
Konrad Wagstyl
Arantxa Casanova
Petar Veličković
Estrid Jakobsen
Michal Drozdzal
Alan C. Evans
In order to understand the organization of the cerebral cortex, it is necessary to create a map or parcellation of cortical areas. Reconstructions of the cortical surface created from structural MRI scans are frequently used in neuroimaging as a common coordinate space for representing multimodal neuroimaging data. These meshes are used to investigate healthy brain organization as well as abnormalities in neurological and psychiatric conditions. We frame cerebral cortex parcellation as a mesh segmentation task, and address it by taking advantage of recent advances in generalizing convolutions to the graph domain. In particular, we propose to assess graph convolutional networks and graph attention networks, which, in contrast to previous mesh parcellation models, exploit the underlying structure of the data to make predictions. We show experimentally on the Human Connectome Project dataset that the proposed graph convolutional models outperform current state-of-the-art and baselines, highlighting the potential and applicability of these methods to tackle neuroimaging challenges, paving the road towards a better characterization of brain diseases.
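As a hedged illustration of the graph-attention mechanism assessed in this work and in the Graph Attention Networks paper below (not the authors' code), the following PyTorch sketch implements a single attention head over mesh vertices, using a dense adjacency matrix for brevity; feature sizes are placeholders.

```python
# Minimal single-head graph-attention layer sketch: mesh vertices attend to
# their neighbours as defined by the mesh adjacency matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_vertices, in_dim) vertex features
        # adj: (n_vertices, n_vertices) 0/1 mesh adjacency with self-loops
        h = self.proj(x)                                     # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1), 0.2)  # raw attention scores
        e = e.masked_fill(adj == 0, float("-inf"))           # restrict to mesh neighbours
        alpha = torch.softmax(e, dim=-1)                     # attention coefficients
        return F.elu(alpha @ h)                              # attention-weighted aggregation

layer = GraphAttentionLayer(16, 8)
adj = (torch.rand(10, 10) > 0.5).float()
adj.fill_diagonal_(1.0)                                      # ensure self-loops
out = layer(torch.randn(10, 16), adj)                        # (10, 8) updated vertex features
```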
Graph Attention Networks
Petar Veličković
Guillem Cucurull
Arantxa Casanova
Pietro Lio
A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images
David Vazquez
Jorge Bernal
F. Javier Sánchez
Gloria Fernández-Esparrach
Antonio M. López
Michal Drozdzal
Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
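As an illustrative, hypothetical sketch of the kind of baseline this benchmark trains (not one of the reported FCN architectures), the following PyTorch snippet builds a tiny fully convolutional network with per-pixel cross-entropy over the four endoluminal classes; all sizes are placeholders.

```python
# Minimal sketch (illustrative only): a tiny fully convolutional network
# producing per-pixel logits for 4 endoluminal classes, trained with standard
# per-pixel cross-entropy.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 2, stride=2),  # back to input resolution
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))                 # (B, n_classes, H, W)

model = TinyFCN()
images = torch.randn(2, 3, 128, 128)                         # dummy colonoscopy frames
masks = torch.randint(0, 4, (2, 128, 128))                   # dummy ground-truth labels
loss = nn.CrossEntropyLoss()(model(images), masks)
loss.backward()
```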
Diet Networks: Thin Parameters for Fat Genomics
Pierre Luc Carrier
Akram Erraqabi
Tristan Sylvain
Alex Auvolat
Etienne Dejoie
Marie-Pierre Dubé
Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.
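The core idea in this abstract, predicting the classifier's first-layer weights from per-feature embeddings instead of storing them as free parameters, can be sketched as follows. This is a simplified, hypothetical illustration rather than the Diet Networks code: the embeddings are random placeholders (in the paper they are learned or precomputed per SNP), and all sizes are arbitrary.

```python
# Minimal sketch of the parameter-prediction idea (not the authors' code):
# the classifier's first-layer weight matrix (n_features x n_hidden) is not a
# free parameter; it is produced, feature by feature, by a small auxiliary
# network from a fixed per-feature embedding.
import torch
import torch.nn as nn

n_features, embed_dim, n_hidden, n_classes = 100_000, 64, 128, 26   # illustrative sizes

feature_embeddings = torch.randn(n_features, embed_dim)   # placeholder per-SNP embeddings
param_predictor = nn.Sequential(                           # maps embedding -> first-layer weights
    nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_hidden)
)
classifier_head = nn.Linear(n_hidden, n_classes)           # ordinary small output layer

def forward(snps: torch.Tensor) -> torch.Tensor:
    # snps: (batch, n_features) ternary genotype values
    W1 = param_predictor(feature_embeddings)                # (n_features, n_hidden), predicted, not stored
    hidden = torch.relu(snps @ W1)                          # first layer uses the predicted weights
    return classifier_head(hidden)

logits = forward(torch.randint(0, 3, (4, n_features)).float())
print(logits.shape)   # (4, n_classes); free parameters scale with embed_dim, not n_features
```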
Diet Networks: Thin Parameters for Fat Genomics
Pierre Luc Carrier
Akram Erraqabi
Tristan Sylvain
Alex Auvolat
Etienne Dejoie
M. Dubé
Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using the known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer: each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.