
Julie Hussin

Associate Academic Member
Associate Professor, Université de Montréal

Biography

Julie Hussin is an associate professor in the Faculty of Medicine at Université de Montréal (UdeM) and a researcher at the Montréal Heart Institute. She is a Junior 2 Research Scholar funded by the Fonds de Recherche du Québec en Santé (FRQS) and chair of the graduate program in bioinformatics at UdeM.

Trained in statistical and evolutionary genomics, Hussin has significant experience in handling multi-omics datasets from large population cohorts. Her work in computational biology is relevant to medical and population genomics, fields in which she has contributed to several methodological advances. Her interdisciplinary work, which aims to develop innovative tools for precision medicine, focuses on improving risk prediction and the management of cardiometabolic disease, particularly heart failure.

Her approaches integrate various data types, such as clinical, genetic, transcriptomic, proteomic and metabolomic data, to uncover new insights into the biological determinants of heart disease, notably through unsupervised learning techniques. In the context of the COVID-19 pandemic, Hussin’s group also led the development of data science algorithms to analyze viral genetic data, aid viral surveillance efforts, and study host-pathogen interactions and viral evolution.

Her work also focuses on the interpretability, generalizability and fairness of machine learning algorithms in health research. Hussin is dedicated to promoting fair, safe and transparent AI in health research, striving for inclusivity and representation to ensure her work benefits all segments of the population. She teaches several undergraduate and graduate courses in computational biology, population genetics and machine learning for genomics. Prior to joining UdeM as a professor, she was a Human Frontier Postdoctoral Fellow at the Wellcome Trust Centre for Human Genetics at the University of Oxford (Linacre College) and a visiting fellow at McGill University.

Current Students

PhD - Université de Montréal
Co-supervisor:
PhD - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal

Publications

Population Genomics Approaches for Genetic Characterization of SARS-CoV-2 Lineages
Fatima Mostefai
Isabel Gamache
Arnaud N’Guessan
Justin Pelletier
Jessie Huang
Carmen Lia Murall
Ahmad Pesaranghader
Vanda Gaonac'h-Lovejoy
David J. Hamelin
Raphael Poujol
Jean-Christophe Grenier
Martin W. Smith
Étienne Caron
Morgan Craig
Smita Krishnaswamy
B. Jesse Shapiro
The genome of the Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), the pathogen that causes coronavirus disease 2019 (COVID-19), has been sequenced at an unprecedented scale, leading to a tremendous amount of viral genome sequencing data. To assist in tracing infection pathways and designing preventive strategies, a deep understanding of the viral genetic diversity landscape is needed. We present here a set of genomic surveillance tools from population genetics which can be used to better understand the evolution of this virus in humans. To illustrate the utility of this toolbox, we detail an in-depth analysis of the genetic diversity of SARS-CoV-2 in the first year of the COVID-19 pandemic. We analyzed 329,854 high-quality consensus sequences published in the GISAID database during the pre-vaccination phase. We demonstrate that, compared to standard phylogenetic approaches, haplotype networks can be computed efficiently on much larger datasets. This approach enables real-time lineage identification, a clear description of the relationship between variants of concern, and efficient detection of recurrent mutations. Furthermore, the time-series change of Tajima's D by haplotype provides a powerful metric of lineage expansion. Finally, principal component analysis (PCA) highlights key steps in variant emergence and facilitates the visualization of genomic variation in the context of SARS-CoV-2 diversity. The computational framework presented here is simple to implement and insightful for real-time genomic surveillance of SARS-CoV-2 and could be applied to any pathogen that threatens the health of populations of humans and other organisms.
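The abstract leans on Tajima's D as a summary statistic for lineage expansion. As a rough, self-contained illustration (not the toolbox released with the paper), the Python sketch below computes Tajima's D from a list of aligned haplotype sequences using the standard formulation; the function name and toy input are illustrative assumptions.

```python
import math
from itertools import combinations

def tajimas_d(haplotypes):
    """Compute Tajima's D for a list of aligned, equal-length sequences.

    Returns None when D is undefined (fewer than 4 sequences or no
    segregating sites).
    """
    n = len(haplotypes)
    if n < 4:
        return None
    length = len(haplotypes[0])

    # Segregating sites: positions with more than one observed allele.
    seg_sites = sum(1 for i in range(length)
                    if len({h[i] for h in haplotypes}) > 1)
    if seg_sites == 0:
        return None

    # Nucleotide diversity (pi): average number of pairwise differences.
    pair_diffs = [sum(a != b for a, b in zip(h1, h2))
                  for h1, h2 in combinations(haplotypes, 2)]
    pi = sum(pair_diffs) / len(pair_diffs)

    # Watterson's estimator and the standard variance terms (Tajima, 1989).
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)

    theta_w = seg_sites / a1
    variance = e1 * seg_sites + e2 * seg_sites * (seg_sites - 1)
    return (pi - theta_w) / math.sqrt(variance)

# Toy input: five short haplotypes restricted to variable positions.
print(tajimas_d(["AAGT", "AAGA", "ATGA", "AAGA", "TAGA"]))
```

Computed per haplotype group over successive time windows, this statistic yields the kind of expansion signal the abstract describes.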
Genomic epidemiology and associated clinical outcomes of a SARS-CoV-2 outbreak in a general adult hospital in Quebec
Bastien Paré
Marieke Rozendaal
Sacha Morin
Raphael Poujol
Fatima Mostefai
Shawn M. Simpson
Jean-Christophe Grenier
Léa Kaufmann
Henry Xing
Miguelle Sanchez
Ariane Yechouron
Ronald Racette
Ivan Pavlov
Martin Smith
Patient health records and whole viral genomes from an early SARS-CoV-2 outbreak in a Quebec hospital reveal features associated with favorable outcomes
Bastien Paré
Marieke Rozendaal
Sacha Morin
Léa Kaufmann
Shawn M. Simpson
Raphael Poujol
Fatima Mostefai
Jean-Christophe Grenier
Henry Xing
Miguelle Sanchez
Ariane Yechouron
Ronald Racette
Ivan Pavlov
Martin Smith
Data-driven approaches for genetic characterization of SARS-CoV-2 lineages
Fatima Mostefai
Isabel Gamache
Jessie Huang
Arnaud N’Guessan
Justin Pelletier
Ahmad Pesaranghader
David J. Hamelin
Carmen Lia Murall
Raphael Poujol
Jean-Christophe Grenier
Martin Smith
Etienne Caron
Morgan Craig
Jesse Shapiro
Smita Krishnaswamy
The genome of the Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), the pathogen that causes coronavirus disease 2019 (COVID-19), has been sequenced at an unprecedented scale, leading to a tremendous amount of viral genome sequencing data. To understand the evolution of this virus in humans, and to assist in tracing infection pathways and designing preventive strategies, we present a set of computational tools that span phylogenomics, population genetics and machine learning approaches. To illustrate the utility of this toolbox, we detail an in-depth analysis of the genetic diversity of SARS-CoV-2 in the first year of the COVID-19 pandemic, using 329,854 high-quality consensus sequences published in the GISAID database during the pre-vaccination phase. We demonstrate that, compared to standard phylogenetic approaches, haplotype networks can be computed efficiently on much larger datasets, enabling real-time analyses. Furthermore, the time-series change of Tajima's D provides a powerful metric of population expansion. Unsupervised learning techniques further highlight key steps in variant detection and facilitate the study of the role of this genomic variation in the context of SARS-CoV-2 infection, with the Multiscale PHATE methodology identifying fine-scale structure in the SARS-CoV-2 genetic data that underlies the emergence of key lineages. The computational framework presented here is useful for real-time genomic surveillance of SARS-CoV-2 and could be applied to any pathogen that threatens the health of worldwide populations of humans and other organisms.
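To make the dimensionality-reduction step concrete, here is a minimal sketch (not the paper's actual pipeline, and using plain PCA rather than Multiscale PHATE) that encodes consensus sequences as a binary mutation-presence matrix relative to a reference and projects it with scikit-learn; the reference and sequences are toy placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical inputs: a reference genome and aligned consensus sequences of
# the same length. In practice these would come from curated GISAID
# alignments; here they are toy placeholders.
reference = "AAGTCCAAGT"
consensus_seqs = {
    "seq1": "AAGTCCAAGT",
    "seq2": "AAGACCAAGT",
    "seq3": "ATGACCAAGA",
    "seq4": "AAGACCTAGT",
}

# Binary encoding: 1 wherever a sequence differs from the reference.
positions = range(len(reference))
X = np.array([[int(seq[i] != reference[i]) for i in positions]
              for seq in consensus_seqs.values()])

# Project the mutation-presence matrix onto its first two principal
# components; clusters in this space roughly track major lineages.
pca = PCA(n_components=2)
coords = pca.fit_transform(X)

for name, (pc1, pc2) in zip(consensus_seqs, coords):
    print(f"{name}: PC1={pc1:+.3f} PC2={pc2:+.3f}")
print("explained variance ratio:", pca.explained_variance_ratio_)
```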
Multiscale PHATE Exploration of SARS-CoV-2 Data Reveals Multimodal Signatures of Disease
Manik Kuchroo
Jessie Huang
Patrick Wong
Jean-Christophe Grenier
Dennis Shung
Alexander Tong
Carolina Lucas
Jon Klein
Daniel B. Burkhardt
Scott Gigante
Abhinav Godavarthi
Benjamin Israelow
Tianyang Mao
Ji Eun Oh
Julio Silva
Takehiro Takahashi
Camila D. Odio
Arnau Casanovas-Massana
John Fournier
Shelli Farhadian … (see 7 more)
Charles S. Dela Cruz
Albert I. Ko
F. Perry Wilson
Akiko Iwasaki
Smita Krishnaswamy
Diet Networks: Thin Parameters for Fat Genomics
Pierre Luc Carrier
Akram Erraqabi
Tristan Sylvain
Alex Auvolat
Etienne Dejoie
Marc-André Legault
Marie-Pierre Dubé
Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity, not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.
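To make the parameter-prediction idea concrete, the PyTorch sketch below uses an auxiliary network to map each feature's embedding to that feature's vector of first-layer weights, so the classifier never stores a dense features-by-hidden-units weight matrix as free parameters. The class name, layer sizes and the source of the feature embeddings are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DietNetworkClassifier(nn.Module):
    """Sketch of the Diet Networks idea: the first-layer weight matrix
    (n_features x n_hidden) is predicted from fixed per-feature embeddings
    by a small parameter prediction network rather than stored directly."""

    def __init__(self, feature_embeddings, n_hidden, n_classes):
        super().__init__()
        # feature_embeddings: (n_features, emb_dim) tensor, e.g. a learned or
        # precomputed representation of each SNP (assumed given upfront here).
        self.register_buffer("feature_embeddings", feature_embeddings)
        emb_dim = feature_embeddings.shape[1]

        # Parameter prediction network: one embedding -> one row of W1.
        self.param_predictor = nn.Sequential(
            nn.Linear(emb_dim, n_hidden),
            nn.Tanh(),
            nn.Linear(n_hidden, n_hidden),
        )
        self.hidden_bias = nn.Parameter(torch.zeros(n_hidden))
        self.output = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        # x: (batch, n_features) genotype matrix with ternary values {0, 1, 2}.
        w1 = self.param_predictor(self.feature_embeddings)  # (n_features, n_hidden)
        hidden = torch.tanh(x @ w1 + self.hidden_bias)
        return self.output(hidden)

# Toy usage with hypothetical sizes: 1,000 SNPs, 32-dimensional embeddings.
embeddings = torch.randn(1000, 32)
model = DietNetworkClassifier(embeddings, n_hidden=64, n_classes=5)
logits = model(torch.randint(0, 3, (8, 1000)).float())
print(logits.shape)  # torch.Size([8, 5])
```

The saving comes from the fact that the parameter prediction network's size depends on the embedding and hidden dimensions, not on the number of SNPs, so its free-parameter count stays fixed as the input genome grows.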