Portrait of Julie Hussin

Julie Hussin

Associate Academic Member
Assistant Professor, Université de Montréal
Research Topics
Medical Machine Learning
Multimodal Learning
Deep Learning
Computational Biology
Data Mining

Biography

Julie Hussin is an Associate Professor in the Department of Medicine at Université de Montréal (UdeM) and a researcher at the Montreal Heart Institute (ICM). She holds a Canada Research Chair (Tier 2) in Responsible Multi-Omics Data Science and chairs the graduate programs in bioinformatics at UdeM.

Trained in statistical and evolutionary genomics, Dr. Hussin has extensive experience analyzing large multi-omics datasets from population cohorts. Her work in computational biology spans medical genomics and population genomics, fields to which she has contributed several methodological advances. Her interdisciplinary research program aims to develop innovative tools for precision medicine. Her projects focus in particular on improving the prediction and management of cardiometabolic disease risk, especially heart failure.

Her approaches integrate different types of data (clinical, genetic, transcriptomic, proteomic and metabolomic) to better understand the biological determinants of heart disease, notably through unsupervised learning techniques. In the context of the COVID-19 pandemic, her team also led the development of data science algorithms to analyze viral genetic data, support epidemiological surveillance, and study host-pathogen interactions and viral evolution.

Her work also addresses the interpretability, generalizability and fairness of machine learning algorithms applied to health research. Dr. Hussin advocates for equitable, safe and transparent artificial intelligence in health research, and is committed to fostering inclusivity and representativeness so that her work benefits the population as a whole.

She teaches several undergraduate and graduate courses in computational biology, population genetics and machine learning applied to genomics. Before joining UdeM as a professor, she was a Human Frontier Science Program postdoctoral fellow at the Wellcome Trust Centre for Human Genetics at the University of Oxford (Linacre College), as well as a visiting researcher at McGill University.

Current Students

Research Collaborator - UdeM
PhD - UdeM
PhD - UdeM
PhD - UdeM
Co-supervisor:
PhD - UdeM
Co-supervisor:

Publications

Data-driven approaches for genetic characterization of SARS-CoV-2 lineages
Isabel Gamache
Arnaud N’Guessan
Justin Pelletier
David J. Hamelin
Carmen Lia Murall
Raphael Poujol
Jean-Christophe Grenier
Martin Smith
Etienne Caron
Morgan Craig
Jesse Shapiro
The genome of the Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), the pathogen that causes coronavirus disease 2019 (COVID-19), has been sequenced at an unprecedented scale, leading to a tremendous amount of viral genome sequencing data. To understand the evolution of this virus in humans, and to assist in tracing infection pathways and designing preventive strategies, we present a set of computational tools that span phylogenomics, population genetics and machine learning approaches. To illustrate the utility of this toolbox, we detail an in-depth analysis of the genetic diversity of SARS-CoV-2 in the first year of the COVID-19 pandemic, using 329,854 high-quality consensus sequences published in the GISAID database during the pre-vaccination phase. We demonstrate that, compared to standard phylogenetic approaches, haplotype networks can be computed efficiently on much larger datasets, enabling real-time analyses. Furthermore, the time-series change of Tajima's D provides a powerful metric of population expansion. Unsupervised learning techniques further highlight key steps in variant detection and facilitate the study of the role of this genomic variation in the context of SARS-CoV-2 infection, with the Multiscale PHATE methodology identifying fine-scale structure in the SARS-CoV-2 genetic data that underlies the emergence of key lineages. The computational framework presented here is useful for real-time genomic surveillance of SARS-CoV-2 and could be applied to any pathogen that threatens the health of worldwide populations of humans and other organisms.
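The abstract highlights the time-series change of Tajima's D as a metric of population expansion. As a rough illustration of that statistic, the sketch below computes Tajima's D from a small set of aligned consensus sequences using the standard Tajima (1989) formula; this is a minimal illustration of the metric itself, not the authors' surveillance pipeline, and sequence loading, windowing by collection date and quality filtering are assumed to happen elsewhere.

```python
# Minimal sketch: Tajima's D from aligned sequences (standard Tajima 1989 formula).
# Illustrates the population-genetics statistic named in the abstract, not the
# authors' actual toolbox.
from itertools import combinations
from math import sqrt

def tajimas_d(alignment: list[str]) -> float:
    """Compute Tajima's D for a list of equal-length aligned sequences."""
    n = len(alignment)
    if n < 4:
        raise ValueError("Need at least 4 sequences for a meaningful estimate")

    length = len(alignment[0])
    # S: number of segregating (polymorphic) sites across the alignment.
    S = sum(1 for i in range(length) if len({seq[i] for seq in alignment}) > 1)
    if S == 0:
        return 0.0

    # pi: mean number of pairwise nucleotide differences.
    pairs = list(combinations(alignment, 2))
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)

    # Constants from Tajima (1989).
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)

    # D compares the pairwise-diversity estimator (pi) with Watterson's
    # estimator (S / a1), normalized by the expected standard deviation.
    return (pi - S / a1) / sqrt(e1 * S + e2 * S * (S - 1))

# Toy example with four short aligned sequences.
print(tajimas_d(["ACGTACGT", "ACGTACGT", "ACGTACGA", "ACGAACGT"]))
```

In a surveillance setting, a statistic like this would be computed repeatedly over sequences grouped by sampling date, producing the time series of Tajima's D that the abstract refers to.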
Multiscale PHATE Exploration of SARS-CoV-2 Data Reveals Multimodal Signatures of Disease
Manik Kuchroo
Patrick Wong
Jean-Christophe Grenier
Dennis Shung
Carolina Lucas
Jon Klein
Daniel B. Burkhardt
Scott Gigante
Abhinav Godavarthi
Benjamin Israelow
Tianyang Mao
Ji Eun Oh
Julio Silva
Takehiro Takahashi
Camila D. Odio
Arnau Casanovas-Massana
John Fournier
Shelli Farhadian … (7 more authors)
Charles S. Dela Cruz
Albert I. Ko
F. Perry Wilson
Akiko Iwasaki
Diet Networks: Thin Parameters for Fat Genomics
Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity, not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally, on a population stratification task of interest to medical studies, that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.
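As a rough sketch of the idea described in this abstract (and not the authors' released implementation), the snippet below shows the core mechanism in PyTorch: each input feature gets a distributed representation, an auxiliary parameter prediction network maps that representation to the feature's vector of first-layer weights, and the resulting weight matrix is used by a small classifier. The embedding size, hidden sizes, class count and random data are illustrative assumptions.

```python
# Minimal sketch of the Diet Networks idea: a parameter prediction network
# generates the classifier's (huge) first-layer weights from per-feature
# embeddings, so trainable parameters no longer scale with n_features * n_hidden.
# Dimensions and data below are illustrative, not those of the paper.
import torch
import torch.nn as nn

class DietNetworkSketch(nn.Module):
    def __init__(self, feature_embeddings: torch.Tensor, n_hidden: int = 100, n_classes: int = 26):
        super().__init__()
        # Fixed (or precomputed) distributed representation of each input feature,
        # e.g. one row per SNP position: shape (n_features, embed_dim).
        self.register_buffer("feature_embeddings", feature_embeddings)
        embed_dim = feature_embeddings.shape[1]
        # Parameter prediction network: maps a feature's embedding to its column
        # of first-layer weights (a vector of length n_hidden).
        self.param_predictor = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_hidden),
        )
        # The rest of the classifier is parameterized normally.
        self.hidden_bias = nn.Parameter(torch.zeros(n_hidden))
        self.output = nn.Linear(n_hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W1 has shape (n_features, n_hidden) but is *predicted*, not stored.
        W1 = self.param_predictor(self.feature_embeddings)
        h = torch.relu(x @ W1 + self.hidden_bias)
        return self.output(h)

# Toy usage: 10,000 "SNP" features, 64-dim feature embeddings, 32 patients.
embeddings = torch.randn(10_000, 64)
model = DietNetworkSketch(embeddings)
genotypes = torch.randint(0, 3, (32, 10_000)).float()  # ternary SNP encoding
logits = model(genotypes)
print(logits.shape)  # torch.Size([32, 26])
```

With these toy dimensions, the first layer is driven by roughly 21,000 trainable parameters in the auxiliary network instead of the 1,000,000 weights a dense 10,000-by-100 first layer would require, which is the reduction in free parameters the abstract describes.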