
Smita Krishnaswamy

Affiliate Member
Associate Professor, Yale University
Université de Montréal
Yale
Research Topics
Representation Learning
Deep Learning
Geometric Deep Learning
Spectral Learning
Manifold Learning
Computational Biology
Data Geometry
AI in Health
Brain-Computer Interfaces
Generative Models
Molecular Modeling
Computational Neuroscience
Data Sparsity
Graph Neural Networks
Cognitive Science
Data Science
Dynamical Systems
Information Theory

Biography

Our laboratory develops fundamental mathematical methods in machine learning and deep learning that integrate graph-based learning, signal processing, information theory, data geometry and topology, optimal transport, and dynamical modeling. These methods enable exploratory analysis, scientific inference, interpretation, and hypothesis generation on large biomedical datasets, ranging from single-cell data to brain imaging and molecular structure data, drawn from neuroscience, psychology, stem cell biology, cancer biology, healthcare, and biochemistry. Our work has been instrumental in learning dynamic trajectories from static snapshot data, denoising data, visualization, network inference, molecular structure modeling, and much more.

Publications

Inferring Metabolic States from Single Cell Transcriptomic Data via Geometric Deep Learning
Holly Steach
Siddharth Viswanath
Yixuan He
Xitong Zhang
Natalia Ivanova
Matthew Hirn
Michael Perlmutter
Supervised latent factor modeling isolates cell-type-specific transcriptomic modules that underlie Alzheimer’s disease progression
Liam Hodgson
Yasser Iturria-Medina
Jo Anne Stratton
David A. Bennett
Novel cell states arise in embryonic cells devoid of key reprogramming factors
Scott E. Youlten
Liyun Miao
Caroline Hoppe
Curtis W. Boswell
Damir Musaev
Mario Abdelmessih
Valerie A. Tornini
Antonio J. Giraldez
The capacity for embryonic cells to differentiate relies on a large-scale reprogramming of the oocyte and sperm nucleus into a transient totipotent state. In zebrafish, this reprogramming step is achieved by the pioneer factors Nanog, Pou5f3, and Sox19b (NPS). Yet, it remains unclear whether cells lacking this reprogramming step are directed towards wild type states or towards novel developmental canals in the Waddington landscape of embryonic development. Here we investigate the developmental fate of embryonic cells mutant for NPS by analyzing their single-cell gene expression profiles. We find that cells lacking the first developmental reprogramming steps can acquire distinct cell states. These states are manifested by gene expression modules that result from a failure of nuclear reprogramming, the persistence of the maternal program, and the activation of somatic compensatory programs. As a result, most mutant cells follow new developmental canals and acquire new mixed cell states in development. In contrast, a group of mutant cells acquire primordial germ cell-like states, suggesting that NPS-dependent reprogramming is dispensable for these cell states. Together, these results demonstrate that developmental reprogramming after fertilization is required to differentiate most canonical developmental programs, and loss of the transient totipotent state canalizes embryonic cells into new developmental states in vivo.
AAnet resolves a continuum of spatially-localized cell states to unveil tumor complexity
Aarthi Venkat
Scott E. Youlten
Beatriz P. San Juan
Carley Purcell
Matthew Amodio
Daniel B. Burkhardt
Andrew Benz
Jeff Holst
Cerys McCool
Annelie Mollbrink
Joakim Lundeberg
David van Dijk
Leonard D. Goldstein
Sarah Kummerfeld
Christine L. Chaffer
Identifying functionally important cell states and structure within a heterogeneous tumor remains a significant biological and computational challenge. Moreover, current clustering or trajectory-based computational models are ill-equipped to address the notion that cancer cells reside along a phenotypic continuum. To address this, we present Archetypal Analysis network (AAnet), a neural network that learns key archetypal cell states within a phenotypic continuum of cell states in single-cell data. Applied to single-cell RNA sequencing data from pre-clinical models and a cohort of 34 clinical breast cancers, AAnet identifies archetypes that resolve distinct biological cell states and processes, including cell proliferation, hypoxia, metabolism and immune interactions. Notably, archetypes identified in primary tumors are recapitulated in matched liver, lung and lymph node metastases, demonstrating that a significant component of intratumoral heterogeneity is driven by cell intrinsic properties. Using spatial transcriptomics as orthogonal validation, AAnet-derived archetypes show discrete spatial organization within tumors, supporting their distinct archetypal biology. We further reveal that ligand:receptor cross-talk between cancer and adjacent stromal cells contributes to intra-archetypal biological mimicry. Finally, we use AAnet archetype identifiers to validate GLUT3 as a critical mediator of a hypoxic cell archetype harboring a cancer stem cell population, which we validate in human triple-negative breast cancer specimens. AAnet is a powerful tool to reveal functional cell states within complex samples from multimodal single-cell data.
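The archetype idea above can be illustrated with classical archetypal analysis, of which AAnet is a neural-network generalization: each cell is modeled as a convex combination of a few archetypal states. The sketch below is ours, not the AAnet code; it only fits simplex weights for a fixed, hypothetical set of archetypes, and all names are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def archetypal_weights(X, Z, steps=500):
    """Fit simplex weights W so that X ~= W @ Z for fixed archetypes Z.

    Each data point (row of X) is expressed as a convex combination of
    archetypes (rows of Z), i.e. each row of W lies on the simplex.
    """
    n, k = X.shape[0], Z.shape[0]
    lr = 1.0 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    W = np.full((n, k), 1.0 / k)
    for _ in range(steps):
        grad = (W @ Z - X) @ Z.T              # gradient of 0.5 * ||X - W Z||^2
        W = np.apply_along_axis(project_simplex, 1, W - lr * grad)
    return W

# Toy usage: three hypothetical archetypes in a 5-dimensional expression space.
rng = np.random.default_rng(0)
Z = rng.random((3, 5))
W_true = rng.dirichlet(np.ones(3), size=100)
X = W_true @ Z
W_est = archetypal_weights(X, Z)
print(np.abs(W_est - W_true).max())  # recovery error of the simplex weights
```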
BLIS-Net: Classifying and Analyzing Signals on Graphs
Charles Xu
Laney Goldman
Valentina Guo
Benjamin Hollander-Bodie
Maedee Trank-Greene
Ian Adelstein
Edward De Brouwer
Rex Ying
Michael Perlmutter
Graph neural networks (GNNs) have emerged as a powerful tool for tasks such as node classification and graph classification. However, much less work has been done on signal classification, where the data consists of many functions (referred to as signals) defined on the vertices of a single graph. These tasks require networks designed differently from those designed for traditional GNN tasks. Indeed, traditional GNNs rely on localized low-pass filters, and signals of interest may have intricate multi-frequency behavior and exhibit long range interactions. This motivates us to introduce the BLIS-Net (Bi-Lipschitz Scattering Net), a novel GNN that builds on the previously introduced geometric scattering transform. Our network is able to capture both local and global signal structure and is able to capture both low-frequency and high-frequency information. We make several crucial changes to the original geometric scattering architecture which we prove increase the ability of our network to capture information about the input signal and show that BLIS-Net achieves superior performance on both synthetic and real-world data sets based on traffic flow and fMRI data.
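As background for the abstract above, the sketch below illustrates first-order geometric scattering moments of a signal on a graph, the construction that BLIS-Net builds on. It is a minimal NumPy illustration under our own assumptions (lazy random-walk diffusion, dyadic wavelets, a toy path graph), not the BLIS-Net architecture itself.

```python
import numpy as np

def lazy_walk_operator(A):
    """Lazy random-walk diffusion operator P = (I + A D^-1) / 2."""
    d = A.sum(axis=0)
    return 0.5 * (np.eye(len(d)) + A / d)   # divide each column by its degree

def scattering_features(A, x, J=4, q_moments=(1, 2, 3, 4)):
    """First-order geometric scattering moments of a signal x on a graph.

    Wavelets are dyadic differences of diffusion powers,
        Psi_j = P^(2^(j-1)) - P^(2^j),   j = 1..J,
    and features are statistical moments of |Psi_j x| aggregated over nodes.
    """
    P = lazy_walk_operator(A)
    feats = []
    prev = P                                   # P^(2^0)
    for j in range(1, J + 1):
        curr = np.linalg.matrix_power(P, 2 ** j)
        wavelet_coeffs = np.abs((prev - curr) @ x)
        feats.extend(np.power(wavelet_coeffs, q).sum() for q in q_moments)
        prev = curr
    return np.array(feats)

# Toy example: a single "spike" signal on one node of a 10-node path graph.
A = np.diag(np.ones(9), 1) + np.diag(np.ones(9), -1)
x = np.zeros(10)
x[3] = 1.0
print(scattering_features(A, x))
```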
Directed Scattering for Knowledge Graph-Based Cellular Signaling Analysis
Aarthi Venkat
Joyce Chew
Ferran Cardoso Rodriguez
Christopher J. Tape
Michael Perlmutter
Directed graphs are a natural model for many phenomena, in particular scientific knowledge graphs such as molecular interaction or chemical reaction networks that define cellular signaling relationships. In these situations, source nodes typically have distinct biophysical properties from sinks. Due to their ordered and unidirectional relationships, many such networks also have hierarchical and multiscale structure. However, the majority of methods performing node- and edge-level tasks in machine learning do not take these properties into account, and thus have not been leveraged effectively for scientific tasks such as cellular signaling network inference. We propose a new framework called Directed Scattering Autoencoder (DSAE) which uses a directed version of a geometric scattering transform, combined with the non-linear dimensionality reduction properties of an autoencoder and the geometric properties of the hyperbolic space to learn latent hierarchies. We show this method outperforms numerous others on tasks such as embedding directed graphs and learning cellular signaling networks.
Bayesian Spectral Graph Denoising with Smoothness Prior
Samuel Leone
Xingzhi Sun
Michael Perlmutter
Here we consider the problem of denoising features associated to complex data, modeled as signals on a graph, via a smoothness prior. This is motivated in part by settings such as single-cell RNA where the data is very high-dimensional, but its structure can be captured via an affinity graph. This allows us to utilize ideas from graph signal processing. In particular, we present algorithms for the cases where the signal is perturbed by Gaussian noise, dropout, and uniformly distributed noise. The signals are assumed to follow a prior distribution defined in the frequency domain which favors signals which are smooth across the edges of the graph. By pairing this prior distribution with our three models of noise generation, we propose Maximum A Posteriori (M.A.P.) estimates of the true signal in the presence of noisy data and provide algorithms for computing the M.A.P. Finally, we demonstrate the algorithms’ ability to effectively restore signals from white noise on image data and from severe dropout in single-cell RNA sequence data.
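For the Gaussian-noise case described above, the textbook MAP estimate with a graph-smoothness (Laplacian) prior reduces to a linear filter. The sketch below shows that classical version under our own assumptions; it is not the paper's implementation, and the dropout and uniform-noise models are not covered.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

def map_denoise_gaussian(A, y, gamma=1.0):
    """MAP estimate under Gaussian noise with a graph-smoothness prior.

    The prior favors signals with small x^T L x (smooth across edges), so the
    MAP solution of   min_x ||x - y||^2 + gamma * x^T L x
    is the linear filter x_hat = (I + gamma * L)^{-1} y.
    """
    L = graph_laplacian(A)
    return np.linalg.solve(np.eye(len(y)) + gamma * L, y)

# Toy example: a smooth ramp signal on a path graph, corrupted by white noise.
n = 50
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
clean = np.linspace(0.0, 1.0, n)
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=n)
denoised = map_denoise_gaussian(A, noisy, gamma=5.0)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```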
Abstract B049: Pancreatic beta cell stress pathways drive pancreatic ductal adenocarcinoma development in obesity
Cathy C. Garcia
Aarthi Venkat
Alex Tong
Sherry Agabiti
Lauren Lawres
Rebecca Cardone
Richard G. Kibbey
Mandar Deepak Muzumdar
Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Danqi Liao
Chen Liu
Benjamin W Christensen
Alexander Tong
Guillaume Huguet
Maximilian Nickel
Ian Adelstein
Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions. Indeed, in noisy and high-dimensional data, traditional estimates in ambient dimensions approach a fixed entropy and are prohibitively hard to compute. To address these issues, we leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures. Specifically, we define diffusion spectral entropy (DSE) in neural representations of a dataset as well as diffusion spectral mutual information (DSMI) between different variables representing data. First, we show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data that outperform classic Shannon entropy, nonparametric estimation, and mutual information neural estimation (MINE). We then study the evolution of representations in classification networks with supervised learning, self-supervision, or overfitting. We observe that (1) DSE of neural representations increases during training; (2) DSMI with the class label increases during generalizable learning but stays stagnant during overfitting; (3) DSMI with the input signal shows differing trends: on MNIST it increases, while on CIFAR-10 and STL-10 it decreases. Finally, we show that DSE can be used to guide better network initialization and that DSMI can be used to predict downstream classification accuracy across 962 models on ImageNet.
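A rough sketch of the diffusion spectral entropy idea as described in the abstract: build a diffusion operator over the data, then take the Shannon entropy of its normalized eigenvalue spectrum. The kernel choice, bandwidth, and normalization below are our assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def diffusion_operator(X, sigma=1.0):
    """Row-stochastic diffusion operator from a Gaussian affinity kernel."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def diffusion_spectral_entropy(X, t=1, sigma=1.0):
    """Shannon entropy of the (powered, normalized) diffusion spectrum.

    A representation concentrated near a low-dimensional manifold puts most
    spectral mass on a few eigenvalues (low entropy); noisy high-dimensional
    data spreads the spectrum out (high entropy).
    """
    P = diffusion_operator(X, sigma)
    eigvals = np.abs(np.linalg.eigvals(P)) ** t
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy comparison: points on a 1-D manifold vs. isotropic noise in 50 dimensions.
rng = np.random.default_rng(0)
line = np.linspace(0.0, 1.0, 200)[:, None] * np.ones((1, 50))
noise = rng.normal(size=(200, 50))
print(diffusion_spectral_entropy(line), diffusion_spectral_entropy(noise))
```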
Learnable Filters for Geometric Scattering Modules
Alexander Tong
Frederik Wenkel
Dhananjay Bhaskar
Kincaid MacDonald
Jackson Grady
Michael Perlmutter
Inferring dynamic regulatory interaction graphs from time series data with perturbations
Dhananjay Bhaskar
Daniel Sumner Magruder
Edward De Brouwer
Matheo Morales
Aarthi Venkat
Frederik Wenkel
Graph topological property recovery with heat and wave dynamics-based features on graphs
Dhananjay Bhaskar
Yanlei Zhang
Charles Xu
Xingzhi Sun
Oluwadamilola Fasina
Maximilian Nickel
Michael Perlmutter