Publications

Why do LLMs attend to the first token?
Federico Barbero
Álvaro Arroyo
Xiangming Gu
Christos Perivolaropoulos
Michael M. Bronstein
Petar Veličković
NoProp: Training Neural Networks without Full Back-propagation or Full Forward-propagation
Qinyu Li
Yee Whye Teh
The canonical deep learning approach to learning requires computing a gradient term at each block by back-propagating the error signal from the output towards each learnable parameter. Given the stacked structure of neural networks, where each block builds on the representation of the block below, this approach leads to hierarchical representations: more abstract features live in the top blocks of the model, while features in lower blocks are expected to be less abstract. In contrast, we introduce a new learning method named NoProp, which does not rely on either forward or backward propagation across the entire network. Instead, NoProp takes inspiration from diffusion and flow-matching methods, where each block independently learns to denoise a noisy target using only local targets and back-propagation within the block. We believe this work takes a first step towards introducing a new family of learning methods that do not learn hierarchical representations -- at least not in the usual sense. NoProp fixes the representation at each block beforehand to a noised version of the target and learns a local denoising process that can then be exploited at inference. We demonstrate the effectiveness of our method on the MNIST, CIFAR-10, and CIFAR-100 image classification benchmarks. Our results show that NoProp is a viable learning algorithm that is easy to use and computationally efficient. By departing from the traditional learning paradigm, which requires back-propagating a global error signal, NoProp alters how credit assignment is done within the network, enabling more efficient distributed learning and potentially affecting other characteristics of the learning process.
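The per-block denoising scheme described in the abstract can be illustrated with a minimal sketch. The block architecture, the single fixed noise level `alpha`, and the MSE objective below are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class DenoisingBlock(nn.Module):
    """One block: predicts the clean label embedding from image features
    and a noised label embedding (hypothetical architecture)."""
    def __init__(self, d_img: int, d_label: int, d_hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_img + d_label, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, d_label),
        )

    def forward(self, img_feat, noisy_label):
        return self.net(torch.cat([img_feat, noisy_label], dim=-1))


def train_block_locally(block, loader, alpha: float, lr: float = 1e-3, epochs: int = 1):
    """Train a single block with back-propagation confined to that block:
    no error signal crosses block boundaries."""
    opt = torch.optim.Adam(block.parameters(), lr=lr)
    for _ in range(epochs):
        for img_feat, label_emb in loader:
            # Noise the target to a fixed level, diffusion-style.
            noise = torch.randn_like(label_emb)
            noisy = alpha ** 0.5 * label_emb + (1 - alpha) ** 0.5 * noise
            loss = nn.functional.mse_loss(block(img_feat, noisy), label_emb)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return block
```

In this reading, each block is trained independently on its own fixed noise level; at inference one would start from Gaussian noise and apply the blocks in sequence, each refining the estimate of the label embedding.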
Language-Guided Trajectory Traversal in Disentangled Stable Diffusion Latent Space for Factorized Medical Image Generation
Text-to-image diffusion models have demonstrated a remarkable ability to generate photorealistic images from natural language prompts. These high-resolution, language-guided synthesized images are essential for explaining disease or exploring causal relationships. However, their potential for disentangling and controlling latent factors of variation in specialized domains like medical imaging remains under-explored. In this work, we present the first investigation of the power of pre-trained vision-language foundation models, once fine-tuned on medical image datasets, to perform latent disentanglement for factorized medical image generation and interpolation. Through extensive experiments on chest X-ray and skin datasets, we illustrate that fine-tuned, language-guided Stable Diffusion inherently learns to factorize key attributes for image generation, such as the patient's anatomical structures or disease diagnostic features. We devise a framework to identify, isolate, and manipulate key attributes through latent space trajectory traversal of generative models, facilitating precise control over medical image synthesis.
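The trajectory traversal described in the abstract can be pictured as a walk through latent space along an attribute direction. The straight-line path and unit-normalised direction below are assumptions for illustration; in the paper the direction would come from the identified disentangled factors, and each latent would be decoded by the fine-tuned Stable Diffusion model.

```python
import torch

def traverse_latent(z_start, direction, steps: int = 8, step_size: float = 0.5):
    """Yield latents along a straight trajectory from z_start in the
    direction of a (hypothetical) attribute axis."""
    direction = direction / direction.norm()
    for i in range(steps + 1):
        yield z_start + i * step_size * direction
```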
Leveraging Vision-Language Foundation Models to Reveal Hidden Image-Attribute Relationships in Medical Imaging
Vision-language foundation models (VLMs) have shown impressive performance in guiding image generation through text, with emerging applications in medical imaging. In this work, we are the first to investigate the question: 'Can fine-tuned foundation models help identify critical, and possibly unknown, data properties?' Evaluating our proposed method on a chest X-ray dataset, we show that, according to numerous metrics, these models generate higher-resolution, more precisely edited images than methods that rely on Structural Causal Models (SCMs). For the first time, we demonstrate that fine-tuned VLMs can reveal hidden data relationships that were previously obscured by limited metadata granularity and model capacity. Our experiments demonstrate the potential of these models to reveal underlying dataset properties while also exposing the limitations of fine-tuned VLMs for accurate image editing and their susceptibility to biases and spurious correlations.
Steering CLIP's vision transformer with sparse autoencoders
Ethan Goldfarb
Lorenz Hufe
Yossi Gandelsman
Robert Graham
Wojciech Samek
While vision models are highly capable, their internal mechanisms remain poorly understood -- a challenge that sparse autoencoders (SAEs) have helped address in language but which remains underexplored in vision. We address this gap by training SAEs on CLIP's vision transformer and uncover key differences between vision and language processing, including distinct sparsity patterns for SAEs trained across layers and token types. We then provide the first systematic analysis of the steerability of CLIP's vision transformer, introducing metrics that quantify how precisely SAE features can be steered to affect the model's output. We find that 10-15% of neurons and features are steerable, with SAEs providing thousands more steerable features than the base model. Through targeted suppression of SAE features, we then demonstrate improved performance on three vision disentanglement tasks (CelebA, Waterbirds, and typographic attacks), finding optimal disentanglement in middle model layers and achieving state-of-the-art performance on defense against typographic attacks. We release our CLIP SAE models and code to support future research in vision transformer interpretability.
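The targeted feature suppression mentioned in the abstract can be sketched as follows. The ReLU sparse autoencoder, the `suppress_feature` helper, and the hook-based editing of a CLIP ViT layer are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """A minimal ReLU sparse autoencoder over transformer activations."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def encode(self, x):
        # Non-negative, sparse feature activations.
        return torch.relu(self.encoder(x))

    def decode(self, f):
        return self.decoder(f)


def suppress_feature(acts, sae, feature_idx: int, scale: float = 0.0):
    """Return an edited activation: encode with the SAE, damp one feature
    (scale=0 removes it entirely), then decode back to activation space."""
    feats = sae.encode(acts).clone()
    feats[..., feature_idx] *= scale
    return sae.decode(feats)
```

In practice the edited activation would be written back into a chosen CLIP vision transformer layer (for example via a forward hook) before the remaining layers run; the choice of layer and hook mechanics are assumptions here.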
Bridging biodiversity and ecosystem services through useful plant species
Nina Obiar
Isaac Eckert
Janelle Baker
Daniel Moerman
Genetic Analysis of Polyunsaturated Fatty Acids Biosynthesis Pathway Determines Four Distinct Thraustochytrid Types
Sou‐Yu Cheng
Yi‐Jing Chen
Hsin‐Yang Chang
Ming‐Der Huang
Thraustochytrids, diverse marine unicellular protists encompassing over 10 recognised genera, are renowned for synthesising polyunsaturated fatty acids (PUFAs), with content and composition varying substantially across genera. While PUFAs are known to be produced via PUFA synthase (PUFA‐S) and/or elongase/desaturase (ELO/DES) pathways, the distinctions in genes involved remain unexplored. This study analysed PUFA biosynthetic genes in 19 thraustochytrid strains across six genera, categorising them into four types. Type I exclusively utilises the ELO/DES pathway, Type II employs both PUFA‐S and complete ELO/DES pathways, while Types III and IV primarily rely on PUFA‐S, with Type III lacking the canonical Δ9 desaturase and Type IV missing most desaturase and elongase enzymes. Notably, the Δ9 desaturase and ATP‐citrate lyase (ACLY) are exclusive to Types I and II, while β‐carotene hydroxylase (CrtZ) is absent in these types. ACLY absence suggests alternative acetyl‐CoA supply pathways in Types III and IV, whereas CrtZ absence implies either a lack of specific xanthophylls or alternative biosynthetic pathways in Types I and II. Synteny analysis revealed conserved genomic organisation of PUFA biosynthetic genes, indicating a shared evolutionary trajectory. This study provides insights into the genetic diversity underlying PUFA biosynthesis in thraustochytrids, while proposing putative evolutionary pathways for the four lineages.