Dynamic Abstractions: Building the Next Generation of Cognitive Tools and Interfaces
Sangho Suh
Hai Dang
Ryan Yen
Josh M. Pollock
Rubaiat Habib Kazi
Hariharan Subramonyam
Jingyi Li
Nazmus Saquib
Arvind Satyanarayan
Effective Protein-Protein Interaction Exploration with PPIretrieval
Chenqing Hua
Connor W. Coley
Shuangjia Zheng
EnzymeFlow: Generating Reaction-specific Enzyme Catalytic Pockets through Flow Matching and Co-Evolutionary Dynamics
Chenqing Hua
Yong Liu
Dinghuai Zhang
Odin Zhang
Sitao Luan
Kevin K Yang
Shuangjia Zheng
MolPhenix: A Multimodal Foundation Model for PhenoMolecular Retrieval
Philip Fradkin
Puria Azadi Moghadam
Karush Suri
Frederik Wenkel
Maciej Sypetkowski
Predicting molecular impact on cellular function is a core challenge in therapeutic design. Phenomic experiments, designed to capture cellular morphology, utilize microscopy-based techniques and offer a high-throughput solution for uncovering molecular impact on the cell. In this work, we learn a joint latent space between molecular structures and microscopy phenomic experiments, aligning paired samples with contrastive learning. Specifically, we study the problem of Contrastive PhenoMolecular Retrieval, which consists of zero-shot molecular structure identification conditioned on phenomic experiments. We assess challenges in multi-modal learning of phenomics and molecular modalities, such as experimental batch effects, inactive molecule perturbations, and encoding perturbation concentration. We demonstrate improved multi-modal retrieval through (1) a uni-modal pre-trained phenomics model, (2) a novel inter-sample similarity-aware loss, and (3) models conditioned on a representation of molecular concentration. Following this recipe, we propose MolPhenix, a molecular phenomics model. MolPhenix leverages a pre-trained phenomics model to demonstrate significant performance gains across perturbation concentrations, molecular scaffolds, and activity thresholds. In particular, we demonstrate an 8.1…
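The sketch below illustrates the general contrastive-alignment recipe described in this abstract (a joint latent space trained with a symmetric InfoNCE objective, with concentration appended as extra conditioning). It is not the authors' MolPhenix implementation; the encoder heads, dimensions, and the concentration-conditioning scheme are placeholder assumptions.

```python
# Minimal sketch of contrastive phenomolecular alignment (InfoNCE / CLIP-style).
# Architectures, dimensions, and the concentration conditioning below are
# illustrative assumptions, not the MolPhenix implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Maps a frozen uni-modal embedding into the shared latent space."""
    def __init__(self, in_dim: int, out_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim), nn.GELU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def infonce_loss(mol_z, pheno_z, temperature: float = 0.07):
    """Symmetric contrastive loss over paired molecule / phenomics embeddings."""
    logits = mol_z @ pheno_z.t() / temperature
    targets = torch.arange(len(mol_z), device=mol_z.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: pretend we already have pre-trained uni-modal embeddings plus a
# scalar concentration per sample, appended to the molecular input as a crude
# stand-in for concentration conditioning.
mol_emb = torch.randn(32, 512)      # e.g. from a molecular encoder
pheno_emb = torch.randn(32, 768)    # e.g. from a pre-trained phenomics model
conc = torch.rand(32, 1)            # perturbation concentration (normalized)

mol_head = ProjectionHead(512 + 1)
pheno_head = ProjectionHead(768)
loss = infonce_loss(mol_head(torch.cat([mol_emb, conc], dim=-1)),
                    pheno_head(pheno_emb))
loss.backward()
```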
Neuro-GSTH: A Geometric Scattering and Persistent Homology Framework for Uncovering Spatiotemporal Signatures in Neural Activity
Dhananjay Bhaskar
Jessica Moore
Yanlei Zhang
Feng Gao
Bastian Rieck
Helen Pushkarskaya
Firas A. Khasawneh
Elizabeth Munch
Valentina Greco
Christopher Pittenger
Understanding how neurons communicate and coordinate their activity is essential for unraveling the brain’s complex functionality. To analyze the intricate spatiotemporal dynamics of neural signaling, we developed Geometric Scattering Trajectory Homology (neuro-GSTH), a novel framework that captures time-evolving neural signals and encodes them into low-dimensional representations. GSTH integrates geometric scattering transforms, which extract multiscale features from brain signals modeled on anatomical graphs, with t-PHATE, a manifold learning method that maps the temporal evolution of neural activity. Topological descriptors from computational homology are then applied to characterize the global structure of these neural trajectories, enabling the quantification and differentiation of spatiotemporal brain dynamics. We demonstrate the power of neuro-GSTH in neuroscience by applying it to both simulated and biological neural datasets. First, we used neuro-GSTH to analyze neural oscillatory behavior in the Kuramoto model, revealing its capacity to track the synchronization of neural circuits as coupling strength increases. Next, we applied neuro-GSTH to neural recordings from the visual cortex of mice, where it accurately reconstructed visual stimulus patterns such as sinusoidal gratings. Neuro-GSTH-derived neural trajectories enabled precise classification of stimulus properties like spatial frequency and orientation, significantly outperforming traditional methods in capturing the underlying neural dynamics. These findings demonstrate that neuro-GSTH effectively identifies neural motifs—distinct patterns of spatiotemporal activity—providing a powerful tool for decoding brain activity across diverse tasks, sensory inputs, and neurological disorders. Neuro-GSTH thus offers new insights into neural communication and dynamics, advancing our ability to map and understand complex brain functions.
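A rough sketch of the pipeline shape described here (per-timepoint geometric scattering features on a neuron graph, a low-dimensional trajectory embedding, then persistent homology of the trajectory) is given below. PCA stands in for t-PHATE and the ripser package for the homology step; the graph construction, wavelet scales, and all hyperparameters are illustrative assumptions rather than the authors' settings.

```python
# Sketch of a GSTH-like pipeline: scattering features per time point ->
# trajectory embedding -> persistence diagrams. Assumes numpy, scikit-learn,
# and ripser (pip install ripser) are available.
import numpy as np
from sklearn.decomposition import PCA
from ripser import ripser

def diffusion_operator(adj: np.ndarray) -> np.ndarray:
    """Lazy random-walk operator P = 1/2 (I + D^{-1} A) on the neuron graph."""
    deg = adj.sum(axis=1, keepdims=True)
    return 0.5 * (np.eye(len(adj)) + adj / np.maximum(deg, 1e-12))

def scattering_features(adj: np.ndarray, signal: np.ndarray, scales=(1, 2, 4, 8)):
    """First-order scattering: mean |(P^j - P^{2j}) x| over diffusion wavelets."""
    P = diffusion_operator(adj)
    feats, powers = [], {}
    for j in scales:
        for k in (j, 2 * j):
            if k not in powers:
                powers[k] = np.linalg.matrix_power(P, k)
        wavelet = powers[j] - powers[2 * j]
        feats.append(np.abs(wavelet @ signal).mean())
    return np.array(feats)

# Toy data: 50 neurons on a random graph, 200 time points of activity.
rng = np.random.default_rng(0)
adj = (rng.random((50, 50)) < 0.1).astype(float)
adj = np.maximum(adj, adj.T)
activity = rng.standard_normal((200, 50))

# One scattering feature vector per time point, embedded into a 2-D trajectory.
features = np.stack([scattering_features(adj, x) for x in activity])
trajectory = PCA(n_components=2).fit_transform(features)

# Topological summary of the trajectory (H0 / H1 persistence diagrams).
diagrams = ripser(trajectory, maxdim=1)["dgms"]
print("H1 features:", len(diagrams[1]))
```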
Seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models
Hafez Ghaemi
Eilif Benjamin Muller
Joint-embedding predictive architecture (JEPA) is a self-supervised learning (SSL) paradigm with the capacity for world modeling via action-conditioned prediction. Previously, JEPA world models have been shown to learn action-invariant or action-equivariant representations by predicting one view of an image from another. Unlike JEPA and similar SSL paradigms, animals, including humans, learn to recognize new objects through a sequence of active interactions. To introduce sequential interactions, we propose seq-JEPA, a novel SSL world model equipped with an autoregressive memory module. Seq-JEPA aggregates a sequence of action-conditioned observations to produce a global representation of them. This global representation, conditioned on the next action, is used to predict the latent representation of the next observation. We empirically show the advantages of aggregating a sequence of action-conditioned observations and examine our sequential modeling paradigm in two settings: (1) predictive learning across saccades, a method inspired by the role of eye movements in embodied vision, which learns self-supervised image representations by processing a sequence of low-resolution visual patches sampled from image saliencies, without any hand-crafted data augmentations; and (2) the invariance-equivariance trade-off, where seq-JEPA's architecture automatically separates invariant and equivariant representations, with the aggregated autoregressor outputs being mostly action-invariant and the encoder output being equivariant. This is in contrast with many equivariant SSL methods that expect a single representational space to contain both invariant and equivariant features, potentially creating a trade-off between the two. Empirically, seq-JEPA achieves competitive performance on both invariance- and equivariance-related benchmarks compared to existing methods. Importantly, both invariance- and equivariance-related downstream performance increases as the number of available observations increases.
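The sketch below shows the shape of an action-conditioned, autoregressive JEPA-style model in the spirit of this description: an encoder, a recurrent aggregator over (observation, action) pairs, and a predictor conditioned on the next action that regresses onto a slowly-updated target encoder. The architectures, dimensions, and EMA schedule are placeholder assumptions, not the paper's exact setup.

```python
# Minimal sketch of a seq-JEPA-like world model. All module choices (GRU
# aggregator, MLP encoder/predictor) and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqJEPASketch(nn.Module):
    def __init__(self, obs_dim=128, act_dim=4, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, emb_dim), nn.GELU(),
                                     nn.Linear(emb_dim, emb_dim))
        # Slowly-updated target encoder provides prediction targets (no gradients).
        self.target_encoder = copy.deepcopy(self.encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Autoregressive memory over (observation embedding, action) pairs.
        self.aggregator = nn.GRU(emb_dim + act_dim, emb_dim, batch_first=True)
        # Predicts the next observation's latent from the summary + next action.
        self.predictor = nn.Sequential(nn.Linear(emb_dim + act_dim, emb_dim),
                                       nn.GELU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, obs_seq, act_seq, next_act, next_obs):
        z = self.encoder(obs_seq)                          # (B, T, emb)
        _, h = self.aggregator(torch.cat([z, act_seq], -1))
        summary = h[-1]                                    # aggregated, mostly action-invariant
        pred = self.predictor(torch.cat([summary, next_act], -1))
        with torch.no_grad():
            target = self.target_encoder(next_obs)         # per-observation, equivariant
        return F.mse_loss(pred, target)

    @torch.no_grad()
    def ema_update(self, tau=0.996):
        for p, tp in zip(self.encoder.parameters(),
                         self.target_encoder.parameters()):
            tp.mul_(tau).add_((1 - tau) * p)

# Toy usage: batch of 8 sequences of 5 action-conditioned observations.
model = SeqJEPASketch()
loss = model(torch.randn(8, 5, 128), torch.randn(8, 5, 4),
             torch.randn(8, 4), torch.randn(8, 128))
loss.backward()
model.ema_update()
```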
Can Safety Fine-Tuning Be More Principled? Lessons Learned from Cybersecurity
David Williams-King
Linh Le
As LLMs develop increasingly advanced capabilities, there is an increased need to minimize the harm that certain model outputs could cause to society; hence, most LLMs have safety guardrails added, for example via fine-tuning. In this paper, we take the position that current safety fine-tuning closely resembles the traditional cat-and-mouse game (or arms race) between attackers and defenders in cybersecurity. Model jailbreaks and attacks are patched with band-aids that target the specific attack mechanism, while many similar attack vectors remain. When defenders do not proactively develop principled mechanisms, it becomes very easy for attackers to sidestep any new defenses. We show how current defenses are insufficient to prevent new adversarial jailbreak attacks, reward hacking, and loss-of-control problems. To learn from past mistakes in cybersecurity, we draw analogies with historical examples and derive lessons that can be applied to LLM safety. These arguments support the need for new and more principled approaches to designing safe models that are architected for security from the beginning. We describe several such approaches from the AI literature.
Epistemic Integrity in Large Language Models
Bijean Ghafouri
Shahrad Mohammadzadeh
James Zhou
Pratheeksha Nair
Jacob-Junqi Tian
Mayank Goel
Kellin Pelrine
Large language models are increasingly relied upon as sources of information, but their propensity for generating false or misleading statements with high confidence poses risks for users and society. In this paper, we confront the critical problem of epistemic miscalibration, where a model's linguistic assertiveness fails to reflect its true internal certainty. We introduce a new human-labeled dataset and a novel method for measuring the linguistic assertiveness of large language models, which cuts error rates by over 50% relative to previous benchmarks. Validated across multiple datasets, our method reveals a stark misalignment between how confidently models linguistically present information and their actual accuracy. Further human evaluations confirm the severity of this miscalibration. This evidence underscores the urgent risk posed by the overstated certainty of large language models, which may mislead users on a massive scale. Our framework provides a crucial step forward in diagnosing and correcting this miscalibration, offering a path to safer and more trustworthy AI across domains.
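A small illustration of the kind of measurement this abstract describes is sketched below: compare how assertively statements are phrased with how often they are actually correct, binned by assertiveness. The binning scheme and the "gap" score are assumptions for illustration, not the paper's metric or dataset.

```python
# Toy epistemic-miscalibration check: gap between linguistic assertiveness and
# empirical accuracy. Purely illustrative; not the paper's measurement method.
import numpy as np

def assertiveness_calibration_gap(assertiveness, correct, n_bins=10):
    """Weighted mean |assertiveness - accuracy| across assertiveness bins."""
    assertiveness = np.asarray(assertiveness, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    gaps, weights = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (assertiveness >= lo) & (assertiveness < hi)
        if mask.any():
            gaps.append(abs(assertiveness[mask].mean() - correct[mask].mean()))
            weights.append(mask.sum())
    return float(np.average(gaps, weights=weights))

# Example: statements phrased with ~0.9 assertiveness but only ~60% accurate
# yield a large gap, flagging overstated certainty.
rng = np.random.default_rng(0)
assertive = np.clip(rng.normal(0.9, 0.05, 1000), 0.0, 0.999)
correct = rng.random(1000) < 0.6
print(f"calibration gap: {assertiveness_calibration_gap(assertive, correct):.3f}")
```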
Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training
Shahrad Mohammadzadeh
Juan David Guerra
As large language models (LLMs) become increasingly deployed across various industries, concerns regarding their reliability, particularly due to hallucinations (outputs that are factually inaccurate or irrelevant to user input), have grown. Our research investigates the relationship between the training process and the emergence of hallucinations, addressing a key gap in existing research that focuses primarily on post hoc detection and mitigation strategies. Using models from the Pythia suite (70M-12B parameters) and several hallucination detection metrics, we analyze hallucination trends throughout training and explore LLM internal dynamics. We introduce SEnsitive Neuron Dropout (SeND), a novel training protocol designed to mitigate hallucinations by reducing variance during training. SeND achieves this by deterministically dropping neurons with significant variability on a dataset, referred to as Sensitive Neurons. In addition, we develop an unsupervised hallucination detection metric, Efficient EigenScore (EES), which approximates the traditional EigenScore at twice the speed. This efficient metric is integrated into our protocol, allowing SeND to be both computationally scalable and effective at reducing hallucinations. Our empirical evaluation demonstrates that our approach improves LLM reliability at test time by up to 40% compared to normal training, while also providing an efficient method to improve factual accuracy when adapting LLMs to domains such as Wikipedia and medical datasets.
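The schematic below sketches the two ideas in this abstract: zeroing the neurons whose activations vary most over a calibration set ("sensitive neurons"), and an EigenScore-style uncertainty signal derived from the covariance spectrum of hidden states. The drop fraction, regularization, and the exact score are illustrative assumptions, not the SeND / EES definitions from the paper.

```python
# Schematic sensitive-neuron dropout and an EigenScore-like dispersion signal.
# Thresholds and formulas are assumptions for illustration only.
import torch

def sensitive_neuron_mask(activations: torch.Tensor, drop_fraction=0.01):
    """activations: (num_examples, hidden_dim). Returns a 0/1 mask that zeroes
    the drop_fraction of neurons with the highest variance across examples."""
    variance = activations.var(dim=0)
    k = max(1, int(drop_fraction * activations.shape[1]))
    sensitive = variance.topk(k).indices
    mask = torch.ones(activations.shape[1])
    mask[sensitive] = 0.0
    return mask

def eigenscore_like(hidden_states: torch.Tensor, eps=1e-3):
    """hidden_states: (num_samples, hidden_dim), e.g. several sampled
    generations for one prompt. The log-spectrum of the regularized covariance
    serves as a dispersion / inconsistency signal."""
    centered = hidden_states - hidden_states.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / max(1, hidden_states.shape[0] - 1)
    eigvals = torch.linalg.eigvalsh(cov + eps * torch.eye(cov.shape[0]))
    return eigvals.clamp_min(eps).log().mean()

# Toy usage on random "hidden states".
acts = torch.randn(256, 512)
mask = sensitive_neuron_mask(acts)   # multiply into the layer's output during training
print("neurons kept:", int(mask.sum().item()))
print("eigenscore-like value:", float(eigenscore_like(torch.randn(10, 512))))
```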
Identifying and Addressing Delusions for Target-Directed Decision-Making
Harry Zhao
Mingde Zhao
Tristan Sylvain
Romain Laroche
We are interested in target-directed agents, which produce targets during decision-time planning to guide their behaviors and achieve better generalization during evaluation. Improper training of these agents can result in delusions: the agent may come to hold false beliefs about the targets, which cannot be properly rejected, leading to unwanted behaviors and damaging out-of-distribution generalization. We identify different types of delusions using intuitive examples in carefully controlled environments and investigate their causes. We demonstrate how delusions can be addressed for agents trained by hindsight relabeling, a mainstream approach for training target-directed RL agents. We empirically validate the effectiveness of the proposed solutions in correcting delusional behaviors and improving out-of-distribution generalization.
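For readers unfamiliar with the training approach referenced here, the sketch below shows generic hindsight relabeling: transitions from a trajectory are duplicated with their target replaced by a state the agent actually reached, so even failed trajectories yield positive learning signal. Field names, the relabeling strategies, and the sparse reward rule are illustrative assumptions, not the paper's exact procedure.

```python
# Generic hindsight-relabeling sketch (HER-style), for illustration only.
import random
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple
    action: int
    next_state: tuple
    target: tuple
    reward: float

def hindsight_relabel(trajectory, reward_fn, strategy="final"):
    """Return extra transitions whose target is replaced by an achieved state."""
    relabeled = []
    for i, t in enumerate(trajectory):
        if strategy == "final":
            new_target = trajectory[-1].next_state
        else:  # "future": any state achieved later in the same trajectory
            new_target = random.choice(trajectory[i:]).next_state
        relabeled.append(Transition(t.state, t.action, t.next_state,
                                    new_target,
                                    reward_fn(t.next_state, new_target)))
    return relabeled

# Toy usage: sparse reward of 1.0 only when the achieved state equals the target.
reward_fn = lambda achieved, target: float(achieved == target)
trajectory = [Transition((0, 0), 1, (0, 1), (3, 3), 0.0),
              Transition((0, 1), 1, (0, 2), (3, 3), 0.0)]
extra = hindsight_relabel(trajectory, reward_fn)
print(extra[-1].reward)  # 1.0: the last step reaches the relabeled target
```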