
Smita Krishnaswamy

Affiliate Member
Associate Professor, Yale University
Université de Montréal
Research Topics
AI in Health
Brain-computer Interfaces
Cognitive Science
Computational Biology
Computational Neuroscience
Data Geometry
Data Science
Data Sparsity
Deep Learning
Dynamical Systems
Generative Models
Geometric Deep Learning
Graph Neural Networks
Information Theory
Manifold Learning
Molecular Modeling
Representation Learning
Spectral Learning

Biography

Our lab develops foundational mathematical machine learning and deep learning methods that incorporate graph-based learning, signal processing, information theory, data geometry and topology, optimal transport, and dynamics modeling. These methods enable exploratory analysis, scientific inference, interpretation, and hypothesis generation on big biomedical datasets, ranging from single-cell data to brain imaging to molecular structural datasets, arising from neuroscience, psychology, stem cell biology, cancer biology, healthcare, and biochemistry. Our work has been instrumental in dynamic trajectory learning from static snapshot data, data denoising, visualization, network inference, molecular structure modeling, and more.

Current Students

Collaborating researcher - Yale University

Publications

MIOFlow 2.0: A unified framework for inferring cellular stochastic dynamics from single cell and spatial transcriptomics data
Xingzhi Sun
João Felipe Rocha
Brett Phelan
Dhananjay Bhaskar
Yanlei Zhang
D. S. Magruder
Ke Xu
Oluwadamilola Fasina
Mark Gerstein
Natalia Ivanova
Christine L. Chaffer
Understanding cellular trajectories via time-resolved single-cell transcriptomics is vital for studying development, regeneration, and disease. A key challenge is inferring continuous trajectories from discrete snapshots. Biological complexity stems from stochastic cell fate decisions, temporal proliferation changes, and spatial environmental influences. Current methods often use deterministic interpolations treating cells in isolation, failing to capture the probabilistic branching, population shifts, and niche-dependent signaling driving real biological processes. We introduce Manifold Interpolating Optimal-Transport Flow (MIOFlow) 2.0. This framework learns biologically informed cellular trajectories by integrating manifold learning, optimal transport, and neural differential equations. It models three core processes: (1) stochasticity and branching via Neural Stochastic Differential Equations; (2) non-conservative population changes using a learned growth-rate model initialized with unbalanced optimal transport; and (3) environmental influence through a joint latent space unifying gene expression with spatial features like local cell type composition and signaling. By operating in a PHATE-distance matching autoencoder latent space, MIOFlow 2.0 ensures trajectories respect the data's intrinsic geometry. Empirical comparisons show expressive trajectory learning via neural differential equations outperforms existing generative models, including simulation-free flow matching. Validated on synthetic datasets, embryoid body differentiation, and spatially resolved axolotl brain regeneration, MIOFlow 2.0 improves trajectory accuracy and reveals hidden drivers of cellular transitions, like specific signaling niches. MIOFlow 2.0 thus bridges single-cell and spatial transcriptomics to uncover tissue-scale trajectories.
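The stochastic-trajectory component can be pictured with a short sketch: a learned drift integrated with Euler-Maruyama noise, the basic mechanism behind Neural Stochastic Differential Equations. All names here (DriftNet, simulate_sde) and the plain PyTorch integrator are illustrative assumptions, not the authors' code; the growth-rate and spatial components are omitted.

```python
# Minimal neural-SDE sketch: dz = f(t, z) dt + sigma dW over a latent space
# (e.g., a PHATE-distance matching autoencoder space, as in the abstract).
import torch
import torch.nn as nn

class DriftNet(nn.Module):
    """Learned drift f(t, z); time is concatenated so dynamics can vary."""
    def __init__(self, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, t: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        t_col = t.expand(z.shape[0], 1)
        return self.net(torch.cat([z, t_col], dim=-1))

def simulate_sde(drift, z0, t0=0.0, t1=1.0, n_steps=100, sigma=0.1):
    """Euler-Maruyama integration, returning the whole stochastic path."""
    dt = (t1 - t0) / n_steps
    z, path = z0, [z0]
    for k in range(n_steps):
        t = torch.tensor(t0 + k * dt)
        noise = torch.randn_like(z) * (sigma * dt ** 0.5)
        z = z + drift(t, z) * dt + noise
        path.append(z)
    return torch.stack(path)  # (n_steps + 1, batch, latent_dim)

drift = DriftNet(latent_dim=8)
z0 = torch.randn(32, 8)          # a batch of cells at the first snapshot
path = simulate_sde(drift, z0)   # stochastic trajectories between snapshots
print(path.shape)                # torch.Size([101, 32, 8])
```

Training such a model would match the simulated marginals at snapshot times to the observed cell populations, for example with an optimal-transport loss, consistent with the framework described above.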
HypRAG: Hyperbolic Dense Retrieval for Retrieval Augmented Generation
Hiren Madhu
Ngoc Bui
Ali Maatouk
Leandros Tassiulas
Menglin Yang
Sukanta Ganguly
Kiran Srinivasan
Rex Ying
Embedding geometry plays a fundamental role in retrieval quality, yet dense retrievers for retrieval-augmented generation (RAG) remain largely confined to Euclidean space. However, natural language exhibits hierarchical structure from broad topics to specific entities that Euclidean embeddings fail to preserve, causing semantically distant documents to appear spuriously similar and increasing hallucination risk. To address these limitations, we introduce hyperbolic dense retrieval, developing two model variants in the Lorentz model of hyperbolic space: HyTE-FH, a fully hyperbolic transformer, and HyTE-H, a hybrid architecture projecting pre-trained Euclidean embeddings into hyperbolic space. To prevent representational collapse during sequence aggregation, we introduce the Outward Einstein Midpoint, a geometry-aware pooling operator that provably preserves hierarchical structure. On MTEB, HyTE-FH outperforms equivalent Euclidean baselines, while on RAGBench, HyTE-H achieves up to 29% gains over Euclidean baselines in context relevance and answer relevance using substantially smaller models than current state-of-the-art retrievers. Our analysis also reveals that hyperbolic representations encode document specificity through norm-based separation, with over 20% radial increase from general to specific concepts, a property absent in Euclidean embeddings, underscoring the critical role of geometric inductive bias in faithful RAG systems.
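As a rough illustration of the geometry involved, the sketch below implements the classical Einstein midpoint in the Klein ball model of hyperbolic space, the standard operator that the paper's Outward Einstein Midpoint builds on (the outward variant itself is not reproduced here); function names are illustrative.

```python
# Einstein midpoint pooling: a Lorentz-factor-weighted average of Klein-ball
# points that provably stays inside the ball, unlike a plain Euclidean mean.
import torch

def lorentz_factor(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """gamma(x) = 1 / sqrt(1 - ||x||^2) for Klein-ball points (||x|| < 1)."""
    sq_norm = (x * x).sum(dim=-1, keepdim=True).clamp(max=1.0 - eps)
    return 1.0 / torch.sqrt(1.0 - sq_norm)

def einstein_midpoint(x: torch.Tensor) -> torch.Tensor:
    """Pool a (seq_len, dim) set of Klein-ball token embeddings to one point."""
    gamma = lorentz_factor(x)                    # (seq_len, 1)
    return (gamma * x).sum(dim=0) / gamma.sum()  # convex combination, norm < 1

tokens = torch.randn(10, 16)
tokens = 0.5 * tokens / tokens.norm(dim=-1, keepdim=True)  # points inside ball
doc_embedding = einstein_midpoint(tokens)
print(doc_embedding.norm() < 1.0)   # midpoint remains in hyperbolic space
```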
Dispersion Loss Counteracts Embedding Condensation and Improves Generalization in Small Language Models
Chen Liu
Xingzhi Sun
Xi Xiao
Alexandre Van Tassel
Ke Xu
Kristof Reimann
Danqi Liao
Mark B. Gerstein
Tianyang Wang
Xiao Wang
Large language models (LLMs) achieve remarkable performance through ever-increasing parameter counts, but scaling incurs steep computational costs. To better understand LLM scaling, we study representational differences between LLMs and their smaller counterparts, with the goal of replicating the representational qualities of larger models in the smaller models. We observe a geometric phenomenon which we term …
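The abstract is truncated before the phenomenon is named, but a dispersion-style regularizer of the kind the title suggests might look like the sketch below: penalize mean pairwise cosine similarity of embeddings so they do not condense into a narrow cone. This exact form is an assumption for exposition, not the paper's loss.

```python
# One plausible dispersion regularizer: push embedding directions apart by
# penalizing their average pairwise cosine similarity within a batch.
import torch
import torch.nn.functional as F

def dispersion_loss(emb: torch.Tensor) -> torch.Tensor:
    """emb: (batch, dim). Lower when embeddings point in diverse directions."""
    z = F.normalize(emb, dim=-1)           # compare directions only
    sim = z @ z.T                          # (batch, batch) cosine similarities
    off_diag = sim - torch.eye(len(z))     # zero out each vector's self-similarity
    return off_diag.mean()

emb = torch.randn(64, 128, requires_grad=True)
loss = dispersion_loss(emb)   # would be added, weighted, to the LM loss
loss.backward()
```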
Self-Supervised Visual Prompting for Cross-Domain Road Damage Detection
Xi Xiao
Zhuxuanzi Wang
Mingqiao Mo
Chen Liu
Chenrui Ma
Yanshu Li
Xiao Wang
Tianyang Wang
The deployment of automated pavement defect detection is often hindered by poor cross-domain generalization. Supervised detectors achieve strong in-domain accuracy but require costly re-annotation for new environments, while standard self-supervised methods capture generic features and remain vulnerable to domain shift. We propose PROBE, a self-supervised framework that visually probes target domains without labels. PROBE introduces a Self-supervised Prompt Enhancement Module (SPEM), which derives defect-aware prompts from unlabeled target data to guide a frozen ViT backbone, and a Domain-Aware Prompt Alignment (DAPA) objective, which aligns prompt-conditioned source and target representations. Experiments on four challenging benchmarks show that PROBE consistently outperforms strong supervised, self-supervised, and adaptation baselines, achieving robust zero-shot transfer, improved resilience to domain variations, and high data efficiency in few-shot adaptation. These results highlight self-supervised prompting as a practical direction for building scalable and adaptive visual inspection systems. Source code is publicly available: https://github.com/xixiaouab/PROBE/tree/main
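As a toy illustration of what aligning prompt-conditioned source and target representations can mean in practice, the sketch below matches first- and second-order feature statistics across domains. This simple moment-matching form is an assumption for exposition, not the paper's DAPA objective.

```python
# Domain alignment by moment matching: bring the means and covariances of
# prompt-conditioned source and target features together.
import torch

def feature_alignment_loss(src_feat: torch.Tensor,
                           tgt_feat: torch.Tensor) -> torch.Tensor:
    """src_feat, tgt_feat: (batch, dim) features from a frozen ViT backbone."""
    mean_gap = (src_feat.mean(0) - tgt_feat.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(src_feat.T) - torch.cov(tgt_feat.T)).pow(2).mean()
    return mean_gap + cov_gap

src = torch.randn(32, 128)   # prompt-conditioned source-domain features
tgt = torch.randn(32, 128)   # prompt-conditioned target-domain features
print(feature_alignment_loss(src, tgt))
```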
Graph topological property recovery with heat and wave dynamics-based features on graphs
Dhananjay Bhaskar
Yanlei Zhang
Charles Xu
Xingzhi Sun
Oluwadamilola Fasina
Maximilian Nickel
Michael Perlmutter
Neural FIM: Bridging Statistical Manifolds and Generative Modeling through Fisher Geometry
Yanlei Zhang
Edward De Brouwer
Danqi Liao
Oluwadamilola Fasina
Ricky T. Q. Chen
Maximilian Nickel
Ian Adelstein
While data diffusion-based embeddings are widely used in unsupervised learning to reveal the intrinsic geometry of data, they are fundamentally constrained by their discrete nature and inability to generalize beyond training points. This limitation ob…
RNAGenScape: Property-guided Optimization and Interpolation of mRNA Sequences with Manifold Langevin Dynamics
Danqi Liao
Chen Liu
Xingzhi Sun
Di'e Tang
Haochen Wang
Scott E. Youlten
Srikar Krishna Gopinath
Haejeong Lee
Ethan C. Strayer
Antonio J. Giraldez
CTR-LoRA: Curvature-Aware and Trust-Region Guided Low-Rank Adaptation for Large Language Models
Zhuxuanzi Wang
Mingqiao Mo
Xi Xiao
Chen Liu
Chenrui Ma
Yunbei Zhang
Xiao Wang
Tianyang Wang
Parameter-efficient fine-tuning (PEFT) has become the standard approach for adapting large language models under limited compute and memory budgets. Although previous methods improve efficiency through low-rank updates, quantization, or heuristic budget reallocation, they often decouple the allocation of capacity from the way updates evolve during training. In this work, we introduce CTR-LoRA, a framework guided by curvature trust region that integrates rank scheduling with stability-aware optimization. CTR-LoRA allocates parameters based on marginal utility derived from lightweight second-order proxies and constrains updates using a Fisher/Hessian-metric trust region. Experiments on multiple open-source backbones (7B-13B), evaluated on both in-distribution and out-of-distribution benchmarks, show consistent improvements over strong PEFT baselines. In addition to increased accuracy, CTR-LoRA enhances training stability, reduces memory requirements, and achieves higher throughput, positioning it on the Pareto frontier of performance and efficiency. These results highlight a principled path toward more robust and deployable PEFT.
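The trust-region mechanism can be sketched as follows: measure a LoRA update in a diagonal Fisher metric and shrink it when it exceeds a trust radius. The names and the rescaling rule below are illustrative assumptions; the paper's rank scheduling is not shown.

```python
# Trust-region-constrained LoRA update with a diagonal Fisher curvature proxy.
import torch

def trust_region_scale(delta_w: torch.Tensor,
                       fisher_diag: torch.Tensor,
                       radius: float) -> torch.Tensor:
    """Shrink delta_w so its Fisher-metric norm stays within the trust radius."""
    # Fisher-metric norm: sqrt(sum_i F_ii * dW_i^2), a cheap second-order proxy.
    metric_norm = torch.sqrt((fisher_diag * delta_w.pow(2)).sum())
    scale = torch.clamp(radius / (metric_norm + 1e-12), max=1.0)
    return delta_w * scale

# LoRA update for one weight matrix: delta_W = B @ A, rank r << min(d_out, d_in)
d_out, d_in, r = 256, 256, 8
B = torch.randn(d_out, r) * 0.01
A = torch.randn(r, d_in) * 0.01
delta_w = B @ A
fisher_diag = torch.rand(d_out, d_in)   # e.g., a running average of grad^2
safe_update = trust_region_scale(delta_w, fisher_diag, radius=0.5)
```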
Equivariant Geometric Scattering Networks via Vector Diffusion Wavelets
David R. Johnson
Rishabh Anand
Michael Perlmutter
VDW-GNNs: Vector diffusion wavelets for geometric graph neural networks
David R. Johnson
Alexander Sietsema
Rishabh Anand
Deanna Needell
Michael Perlmutter
We introduce vector diffusion wavelets (VDWs), a novel family of wavelets inspired by the vector diffusion maps algorithm that was introduced to analyze data lying in the tangent bundle of a Riemannian manifold. We show that these wavelets may be effectively incorporated into a family of geometric graph neural networks, which we refer to as VDW-GNNs. We demonstrate that such networks are effective on synthetic point cloud data, as well as on real-world data derived from wind-field measurements and neural activity data. Theoretically, we prove that these new wavelets have desirable frame theoretic properties, similar to traditional diffusion wavelets. Additionally, we prove that these wavelets have desirable symmetries with respect to rotations and translations.
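For intuition, the sketch below builds ordinary (scalar) diffusion wavelets on a graph as dyadic differences of diffusion powers; VDWs lift this construction to vector fields by replacing the random-walk operator with a connection-style block operator, which is not reproduced here.

```python
# Classical diffusion wavelets: filters T^(2^(j-1)) - T^(2^j) built from a
# lazy random-walk operator T, each extracting a dyadic band of scales.
import numpy as np

def diffusion_wavelets(adj: np.ndarray, num_scales: int):
    """Return [T^(2^(j-1)) - T^(2^j) for j = 1..num_scales] from adjacency adj."""
    deg = adj.sum(axis=1)
    T = 0.5 * (np.eye(len(adj)) + adj / deg[:, None])   # lazy random walk
    filters, T_pow = [], T                              # T_pow = T^(2^(j-1))
    for _ in range(num_scales):
        T_next = T_pow @ T_pow                          # T^(2^j) by squaring
        filters.append(T_pow - T_next)
        T_pow = T_next
    return filters

# Toy graph: a cycle on 6 nodes.
adj = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
for j, W in enumerate(diffusion_wavelets(adj, num_scales=3), start=1):
    print(j, np.abs(W).max())
```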
HEIST: A Graph Foundation Model for Spatial Transcriptomics and Proteomics Data
Hiren Madhu
João Felipe Rocha
Tinglin Huang
Rex Ying
Measure Before You Look: Grounding Embeddings Through Manifold Metrics