Portrait of Julie Hussin

Julie Hussin

Associate Academic Member
Assistant Professor, Université de Montréal
Research Topics
Medical machine learning
Multimodal learning
Deep learning
Computational biology
Data mining

Biography

Julie Hussin is an Associate Professor in the Faculty of Medicine at Université de Montréal (UdeM) and a researcher at the Montreal Heart Institute (ICM). She is also a Junior 2 Research Scholar of the Fonds de recherche du Québec - Santé (FRQS) and head of the graduate programs in bioinformatics at UdeM.

Julie Hussin was trained in statistical and evolutionary genomics and has extensive experience analyzing multi-omics data from large population cohorts. Her work in computational biology focuses mainly on medical and population genomics, where she has contributed to several methodological advances. Her interdisciplinary work aims to develop innovative tools for precision medicine.

Her research projects focus on improving risk prediction and disease management for cardiometabolic diseases, in particular heart failure. The methodologies used in her group integrate multiple data sources, including clinical, genetic, transcriptomic, proteomic and metabolomic data, to enable the discovery of new insights into the biological determinants of heart disease, notably through unsupervised learning techniques. In the context of the COVID-19 pandemic, her team has also developed approaches for analyzing viral genetic data, for viral surveillance and for studying host-pathogen interactions as well as viral evolution.

Her research interests also include the interpretability, generalization and fairness of machine learning algorithms in health research. Julie Hussin is committed to actively promoting equitable, safe and transparent AI in health research, and strives to ensure the inclusivity and representativeness of individuals in her research so that her work benefits the population as a whole. She shares her expertise by teaching several courses in bioinformatics and population genetics, as well as machine learning in genomics. Before joining Université de Montréal as a professor, she was a Human Frontier Science Program postdoctoral fellow at the Wellcome Trust Centre for Human Genetics of the University of Oxford (Linacre College) and a visiting postdoctoral researcher at McGill University.

Current Students

PhD - UdeM
PhD - UdeM
PhD - UdeM
Research Master's - UdeM
PhD - UdeM
Co-supervisor:
PhD - UdeM
Co-supervisor:

Publications

Diet Networks: Thin Parameters for Fat Genomics
Pierre Luc Carrier
Akram Erraqabi
Tristan Sylvain
Alex Auvolat
Etienne Dejoie
Marie-Pierre Dubé
Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity, not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem.
We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.
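The core parametrization idea from the abstract can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' implementation: the dimensions are small placeholders, the feature embeddings are random stand-ins for the learned or precomputed per-SNP representations, and the parameter prediction network is reduced to a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 1000   # SNP positions (millions in practice)
n_hidden = 32       # hidden units in the classifier's first layer
emb_dim = 16        # size of each feature's distributed representation

# 1) A distributed representation for each input feature, based on the
#    feature's identity, not its value (random stand-in here).
feat_emb = rng.normal(size=(n_features, emb_dim))

# 2) Parameter prediction network (reduced to one linear map): takes a
#    feature's embedding and outputs that feature's n_hidden first-layer
#    weights. Its free-parameter count does not depend on n_features.
W_pred = rng.normal(size=(emb_dim, n_hidden)) * 0.1

# Predicted first-layer weight matrix of the classifier.
W1 = feat_emb @ W_pred          # shape (n_features, n_hidden)

# Free parameters: emb_dim * n_hidden instead of n_features * n_hidden.
naive = n_features * n_hidden   # 32000
diet = W_pred.size              # 512

# Forward pass of the first layer on a batch of ternary SNP vectors.
x = rng.integers(0, 3, size=(4, n_features))
h = np.maximum(0, x @ W1)       # ReLU hidden activations, shape (4, 32)
```

Training would backpropagate through `W1` into `W_pred` (and into `feat_emb`, if the embeddings are learned), so the savings come from sharing the prediction network across all features, which is what casts the problem as multi-task learning.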