
Smita Krishnaswamy

Affiliate Member
Associate Professor, Yale University
Université de Montréal
Research Topics
AI in Health
Brain-computer Interfaces
Cognitive Science
Computational Biology
Computational Neuroscience
Data Geometry
Data Science
Data Sparsity
Deep Learning
Dynamical Systems
Generative Models
Geometric Deep Learning
Graph Neural Networks
Information Theory
Manifold Learning
Molecular Modeling
Representation Learning
Spectral Learning

Biography

Our lab develops foundational mathematical machine learning and deep learning methods that incorporate graph-based learning, signal processing, information theory, data geometry and topology, optimal transport, and dynamics modeling. These methods are capable of exploratory analysis, scientific inference, interpretation, and hypothesis generation on big biomedical datasets, ranging from single-cell data to brain imaging to molecular structural datasets, arising from neuroscience, psychology, stem cell biology, cancer biology, healthcare, and biochemistry. Our work has been instrumental in dynamic trajectory learning from static snapshot data, data denoising, visualization, network inference, molecular structure modeling, and more.

Publications

Deep Learning Unlocks the True Potential of Organ Donation after Circulatory Death with Accurate Prediction of Time-to-Death
Xingzhi Sun
Edward De Brouwer
Chen Liu
Ramesh Batra
Increasing the number of organ donations after circulatory death (DCD) has been identified as one of the most important ways of addressing the ongoing organ shortage. While recent technological advances in organ transplantation have increased their success rate, a substantial challenge in increasing the number of DCD donations resides in the uncertainty regarding the timing of cardiac death after terminal extubation, impacting the risk of prolonged ischemic organ injury and negatively affecting post-transplant outcomes. In this study, we trained and externally validated an ODE-RNN model, which combines a recurrent neural network with neural ordinary differential equations and excels in processing irregularly sampled time series data. The model is designed to predict time-to-death following terminal extubation in the intensive care unit (ICU) using the last 24 hours of clinical observations. Our model was trained on a cohort of 3,238 patients from Yale New Haven Hospital, and validated on an external cohort of 1,908 patients from six hospitals across Connecticut. The model achieved accuracies of 95.3 ± 1.0% and 95.4 ± 0.7% for predicting whether death would occur in the first 30 and 60 minutes, respectively, with a calibration error of 0.024 ± 0.009. Heart rate, respiratory rate, mean arterial blood pressure (MAP), oxygen saturation (SpO2), and Glasgow Coma Scale (GCS) scores were identified as the most important predictors. Surpassing existing clinical scores, our model sets the stage for reduced organ acquisition costs and improved post-transplant outcomes.
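For readers curious how such a model is wired, the sketch below shows the core ODE-RNN mechanism in PyTorch: a small MLP parameterizes the hidden-state dynamics between irregularly spaced observations (integrated here with a fixed-step Euler scheme), and a GRU cell folds in each new observation. This is an illustrative toy, not the study's implementation; the layer sizes, the Euler integrator, and the single-trajectory interface are all assumptions.

```python
import torch
import torch.nn as nn

class ODERNN(nn.Module):
    """Minimal ODE-RNN sketch: a small MLP parameterizes dh/dt between
    irregularly spaced observations (integrated with fixed-step Euler),
    and a GRU cell updates the hidden state at each observation."""

    def __init__(self, obs_dim, hidden_dim, euler_steps=10):
        super().__init__()
        self.ode_func = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.gru_cell = nn.GRUCell(obs_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, 1)   # e.g. a time-to-event score
        self.hidden_dim = hidden_dim
        self.euler_steps = euler_steps

    def evolve(self, h, dt):
        # Integrate dh/dt = f(h) over an interval of length dt with Euler steps.
        step = dt / self.euler_steps
        for _ in range(self.euler_steps):
            h = h + step * self.ode_func(h)
        return h

    def forward(self, times, observations):
        # times: (T,) increasing timestamps; observations: (T, obs_dim)
        h = torch.zeros(1, self.hidden_dim)
        prev_t = times[0]
        for t, x in zip(times, observations):
            h = self.evolve(h, (t - prev_t).clamp(min=0.0))  # evolve to observation time
            h = self.gru_cell(x.unsqueeze(0), h)             # update with the observation
            prev_t = t
        return self.readout(h)                               # prediction from final state

# Toy usage: 5 vital-sign observations taken at irregular times.
model = ODERNN(obs_dim=5, hidden_dim=32)
times = torch.tensor([0.0, 0.3, 1.1, 1.2, 2.5])
obs = torch.randn(5, 5)
print(model(times, obs))
```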
ImmunoStruct: Integration of protein sequence, structure, and biochemical properties for immunogenicity prediction and interpretation
Kevin Bijan Givechian
João Felipe Rocha
Edward Yang
Chen Liu
Kerrie Greene
Rex Ying
Etienne Caron
Akiko Iwasaki
Epitope-based vaccines are promising therapeutic modalities for infectious diseases and cancer, but identifying immunogenic epitopes is challenging. The vast majority of prediction methods are sequence-based and do not incorporate wide-scale structure data and biochemical properties across each peptide-MHC (pMHC) complex. We present ImmunoStruct, a deep-learning model that integrates sequence, structural, and biochemical information to predict multi-allele class-I pMHC immunogenicity. By leveraging a multimodal dataset of ∼27,000 peptide-MHC complexes that we generated with AlphaFold, we demonstrate that ImmunoStruct improves immunogenicity prediction performance and interpretability beyond existing methods, across infectious disease epitopes and cancer neoepitopes. We further show strong alignment with in vitro assay results for a set of SARS-CoV-2 epitopes. This work also presents a new architecture that incorporates equivariant graph processing and multi-modal data integration for this long-standing task in immunotherapy.
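A minimal sketch of the multimodal-fusion idea behind this kind of model appears below: separate encoders for the peptide sequence, the structure graph, and the biochemical features, with the embeddings concatenated for a single immunogenicity head. It is not the ImmunoStruct architecture; the plain (non-equivariant) neighbourhood averaging, the encoder sizes, and the toy contact graph are simplifying assumptions.

```python
import torch
import torch.nn as nn

class MultimodalImmunogenicityNet(nn.Module):
    """Illustrative fusion of three modalities for a peptide-MHC complex:
    (1) amino-acid sequence, (2) a structure graph, (3) biochemical features.
    Each modality gets its own encoder; embeddings are concatenated and
    passed to a classification head."""

    def __init__(self, vocab_size=21, emb_dim=32, biochem_dim=8, hidden=64):
        super().__init__()
        self.seq_embed = nn.Embedding(vocab_size, emb_dim)            # sequence tokens
        self.node_proj = nn.Linear(3, emb_dim)                        # 3-D residue coordinates
        self.biochem_mlp = nn.Sequential(nn.Linear(biochem_dim, emb_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, seq_tokens, coords, adjacency, biochem):
        # seq_tokens: (L,) residue indices; coords: (L, 3); adjacency: (L, L); biochem: (biochem_dim,)
        seq_repr = self.seq_embed(seq_tokens).mean(dim=0)             # mean-pooled sequence embedding
        node_feats = self.node_proj(coords)
        # One round of simple (non-equivariant) neighbourhood averaging on the structure graph.
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        struct_repr = ((adjacency @ node_feats) / deg).mean(dim=0)
        bio_repr = self.biochem_mlp(biochem)
        fused = torch.cat([seq_repr, struct_repr, bio_repr], dim=-1)
        return self.head(fused)                                       # immunogenicity logit

# Toy usage on a 9-mer peptide with a chain-like contact graph.
L = 9
model = MultimodalImmunogenicityNet()
adj = torch.diag(torch.ones(L - 1), 1) + torch.diag(torch.ones(L - 1), -1)
logit = model(torch.randint(0, 21, (L,)), torch.randn(L, 3), adj, torch.randn(8))
print(torch.sigmoid(logit))
```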
ProtSCAPE: Mapping the landscape of protein conformations in molecular dynamics
Siddharth Viswanath
Dhananjay Bhaskar
David R. Johnson
João F. Rocha
Egbert Castro
Jackson Grady
Alex T. Grigas
Michael Perlmutter
Corey S. O'Hern
Understanding the dynamic nature of protein structures is essential for comprehending their biological functions. While significant progress has been made in predicting static folded structures, modeling protein motions on microsecond to millisecond scales remains challenging. To address these challenges, we introduce a novel deep learning architecture, Protein Transformer with Scattering, Attention, and Positional Embedding (ProtSCAPE), which leverages the geometric scattering transform alongside transformer-based attention mechanisms to capture protein dynamics from molecular dynamics (MD) simulations. ProtSCAPE utilizes the multi-scale nature of the geometric scattering transform to extract features from protein structures conceptualized as graphs and integrates these features with dual attention structures that focus on residues and amino acid signals, generating latent representations of protein trajectories. Furthermore, ProtSCAPE incorporates a regression head to enforce temporally coherent latent representations.
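The scattering component can be illustrated with a short NumPy sketch: diffusion wavelets built from a lazy random-walk operator are applied to a per-residue signal on a protein graph, and statistical moments of the wavelet responses give multi-scale features. This shows only a first-order geometric scattering step under assumed conventions; the transformer, dual attention, and regression head of ProtSCAPE are not shown.

```python
import numpy as np

def geometric_scattering(adjacency, signal, num_scales=4, moments=(1, 2, 3)):
    """Minimal first-order geometric scattering on a graph signal.
    Diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j) are built from the lazy
    random-walk operator P; features are moments of |Psi_j x|."""
    deg = adjacency.sum(axis=1)
    P = 0.5 * (np.eye(len(deg)) + adjacency / deg[:, None])   # lazy random walk
    # Zeroth-order features: moments of the raw signal.
    features = [(np.abs(signal) ** q).sum() for q in moments]
    for j in range(1, num_scales + 1):
        P_prev = np.linalg.matrix_power(P, 2 ** (j - 1))
        P_curr = np.linalg.matrix_power(P, 2 ** j)
        wavelet_coeffs = (P_prev - P_curr) @ signal            # band-pass response at scale 2^j
        features += [(np.abs(wavelet_coeffs) ** q).sum() for q in moments]
    return np.array(features)

# Toy usage: a ring "protein graph" with a per-residue signal.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
x = np.random.randn(n)
print(geometric_scattering(A, x).shape)
```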
Convergence of Manifold Filter-Combine Networks
David R. Johnson
Joyce Chew
Siddharth Viswanath
Edward De Brouwer
Deanna Needell
Michael Perlmutter
In order to better understand manifold neural networks (MNNs), we introduce Manifold Filter-Combine Networks (MFCNs). The filter-combine framework parallels the popular aggregate-combine paradigm for graph neural networks (GNNs) and naturally suggests many interesting families of MNNs which can be interpreted as the manifold analog of various popular GNNs. We then propose a method for implementing MFCNs on high-dimensional point clouds that relies on approximating the manifold by a sparse graph. We prove that our method is consistent in the sense that it converges to a continuum limit as the number of data points tends to infinity.
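A rough illustration of the recipe, under assumed choices of filters and weights, is given below: a sparse k-NN graph approximates the manifold underlying a point cloud, a bank of diffusion filters is applied to each feature channel (the filter step), and the filtered channels are mixed by a linear map with a pointwise nonlinearity (the combine step). The random weight matrix stands in for learned parameters.

```python
import numpy as np

def knn_graph(points, k=8):
    """Sparse k-NN graph approximating the manifold underlying a point cloud."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    idx = np.argsort(d2, axis=1)[:, :k]
    A = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(points)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)                         # symmetrize

def filter_combine_layer(A, X, n_filters=3, rng=None):
    """One filter-combine step: apply a bank of diffusion filters (powers of
    the random-walk operator) to each channel, then combine the filtered
    channels with a linear map and a pointwise nonlinearity."""
    if rng is None:
        rng = np.random.default_rng(0)
    P = A / A.sum(axis=1, keepdims=True)               # random-walk diffusion operator
    filtered = [np.linalg.matrix_power(P, 2 ** j) @ X for j in range(n_filters)]
    stacked = np.concatenate(filtered, axis=1)         # (n_points, n_filters * n_channels)
    W = rng.standard_normal((stacked.shape[1], X.shape[1]))
    return np.tanh(stacked @ W)

# Toy usage: points sampled near a circle (a 1-D manifold in R^2).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.01 * np.random.randn(200, 2)
A = knn_graph(points)
print(filter_combine_layer(A, points).shape)   # (200, 2)
```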
Neuro-GSTH: A Geometric Scattering and Persistent Homology Framework for Uncovering Spatiotemporal Signatures in Neural Activity
Dhananjay Bhaskar
Jessica Moore
Yanlei Zhang
Feng Gao
Bastian Rieck
Helen Pushkarskaya
Firas A. Khasawneh
Elizabeth Munch
Valentina Greco
Christopher Pittenger
Understanding how neurons communicate and coordinate their activity is essential for unraveling the brain's complex functionality. To analyze the intricate spatiotemporal dynamics of neural signaling, we developed Geometric Scattering Trajectory Homology (neuro-GSTH), a novel framework that captures time-evolving neural signals and encodes them into low-dimensional representations. Neuro-GSTH integrates geometric scattering transforms, which extract multiscale features from brain signals modeled on anatomical graphs, with t-PHATE, a manifold learning method that maps the temporal evolution of neural activity. Topological descriptors from computational homology are then applied to characterize the global structure of these neural trajectories, enabling the quantification and differentiation of spatiotemporal brain dynamics. We demonstrate the power of neuro-GSTH in neuroscience by applying it to both simulated and biological neural datasets. First, we used neuro-GSTH to analyze neural oscillatory behavior in the Kuramoto model, revealing its capacity to track the synchronization of neural circuits as coupling strength increases. Next, we applied neuro-GSTH to neural recordings from the visual cortex of mice, where it accurately reconstructed visual stimulus patterns such as sinusoidal gratings. Neuro-GSTH-derived neural trajectories enabled precise classification of stimulus properties like spatial frequency and orientation, significantly outperforming traditional methods in capturing the underlying neural dynamics. These findings demonstrate that neuro-GSTH effectively identifies neural motifs (distinct patterns of spatiotemporal activity), providing a powerful tool for decoding brain activity across diverse tasks, sensory inputs, and neurological disorders. Neuro-GSTH thus offers new insights into neural communication and dynamics, advancing our ability to map and understand complex brain functions.
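The first two stages of this pipeline can be sketched in a few lines of NumPy: per-time-frame scattering-style features are computed on a fixed anatomical graph, and the resulting feature time series is embedded as a low-dimensional trajectory (PCA here stands in for t-PHATE). The persistent-homology summarization of the trajectory is omitted, and the graph, scales, and moments below are assumptions for illustration only.

```python
import numpy as np

def frame_scattering(P, x, num_scales=3, moments=(1, 2)):
    """Per-frame graph features: moments of diffusion-wavelet responses
    |(P^(2^(j-1)) - P^(2^j)) x|, a compact stand-in for the scattering step."""
    feats = []
    for j in range(1, num_scales + 1):
        wav = (np.linalg.matrix_power(P, 2 ** (j - 1)) -
               np.linalg.matrix_power(P, 2 ** j)) @ x
        feats += [(np.abs(wav) ** q).mean() for q in moments]
    return np.array(feats)

def neural_trajectory(adjacency, activity):
    """activity: (T, n_neurons) time series on a fixed anatomical graph.
    Returns a 2-D trajectory of per-frame scattering features
    (PCA via SVD stands in for the t-PHATE embedding)."""
    deg = adjacency.sum(axis=1, keepdims=True)
    P = 0.5 * (np.eye(len(adjacency)) + adjacency / deg)     # lazy random walk
    feats = np.stack([frame_scattering(P, frame) for frame in activity])
    feats -= feats.mean(axis=0)
    _, _, Vt = np.linalg.svd(feats, full_matrices=False)
    return feats @ Vt[:2].T

# Toy usage: noisy oscillatory activity on a small ring graph.
n, T = 20, 100
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
t = np.linspace(0, 4 * np.pi, T)[:, None]
activity = np.sin(t + np.linspace(0, 2 * np.pi, n)[None, :]) + 0.1 * np.random.randn(T, n)
print(neural_trajectory(A, activity).shape)   # (100, 2)
```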
DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images
Chen Liu
Danqi Liao
Alejandro Parada-Mayorga
Alejandro Ribeiro
Marcello DiStasio
The proliferation of digital microscopy images, driven by advances in automated whole slide scanning, presents significant opportunities for biomedical research and clinical diagnostics. However, accurately annotating densely packed information in these images remains a major challenge. To address this, we introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks. DiffKillR employs two complementary neural networks: one that learns a diffeomorphism-invariant feature space for robust cell matching and another that computes the precise warping field between cells for annotation mapping. Using a small set of annotated archetypes, DiffKillR efficiently propagates annotations across large microscopy images, reducing the need for extensive manual labeling. More importantly, it is suitable for any type of pixel-level annotation. We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
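The two-network idea can be caricatured in PyTorch as below: a small CNN embeds cell patches for archetype matching, and a second network predicts a dense displacement field that warps the matched archetype's annotation mask onto the new cell. This is a toy sketch, not DiffKillR itself; the architectures, the cosine-similarity matching, and the grid_sample-based warping are illustrative assumptions (the diffeomorphism-invariance of the real method comes from its training objective, which is omitted here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbedder(nn.Module):
    """Small CNN that maps cell patches into a feature space used for archetype matching."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class WarpNet(nn.Module):
    """Predicts a dense displacement field from a (cell, archetype) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )
    def forward(self, cell, archetype):
        return self.net(torch.cat([cell, archetype], dim=1))   # (N, 2, H, W) displacements

def propagate_annotation(cell, archetypes, archetype_masks, embedder, warper):
    """Match a cell patch to its closest archetype, then warp that
    archetype's annotation mask onto the cell."""
    sims = embedder(cell) @ embedder(archetypes).T              # cosine similarities
    best = sims.argmax(dim=-1).item()
    disp = warper(cell, archetypes[best : best + 1])            # displacement field
    _, _, H, W = cell.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0) + disp.permute(0, 2, 3, 1)
    return F.grid_sample(archetype_masks[best : best + 1], grid, align_corners=True)

# Toy usage with random 32x32 patches and 5 annotated archetypes.
embedder, warper = PatchEmbedder(), WarpNet()
cell = torch.randn(1, 1, 32, 32)
archetypes, masks = torch.randn(5, 1, 32, 32), torch.rand(5, 1, 32, 32)
print(propagate_annotation(cell, archetypes, masks, embedder, warper).shape)
```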