Publications
Steering CLIP's vision transformer with sparse autoencoders
While vision models are highly capable, their internal mechanisms remain poorly understood, a challenge which sparse autoencoders (SAEs) have helped address in language but which remains underexplored in vision. We address this gap by training SAEs on CLIP's vision transformer and uncover key differences between vision and language processing, including distinct sparsity patterns for SAEs trained across layers and token types. We then provide the first systematic analysis of the steerability of CLIP's vision transformer by introducing metrics to quantify how precisely SAE features can be steered to affect the model's output. We find that 10-15% of neurons and features are steerable, with SAEs providing thousands more steerable features than the base model. Through targeted suppression of SAE features, we then demonstrate improved performance on three vision disentanglement tasks (CelebA, Waterbirds, and typographic attacks), finding optimal disentanglement in middle model layers, and achieving state-of-the-art performance on defense against typographic attacks. We release our CLIP SAE models and code to support future research in vision transformer interpretability.
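The steering intervention described in this abstract (amplifying or suppressing an individual SAE feature in a transformer activation, then decoding back) can be illustrated in a few lines of PyTorch. The sketch below is not the released models' code: the SAE architecture, dimensions, hook point, and feature index are assumptions for demonstration only.

```python
# Minimal sketch of steering one SAE feature in a ViT residual-stream activation.
# All sizes, names, and the intervention point are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: encode activations into a sparse feature space, decode back."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return self.dec(f)

def steer(acts: torch.Tensor, sae: SparseAutoencoder,
          feature_idx: int, scale: float) -> torch.Tensor:
    """Suppress (scale=0) or amplify (scale>1) one SAE feature, then
    reconstruct the activation that is passed back into the model."""
    f = sae.encode(acts)              # [batch, tokens, n_features]
    f[..., feature_idx] *= scale      # targeted feature intervention
    return sae.decode(f)              # steered activation

# Usage with dummy activations; in practice these would be captured with a
# forward hook on a middle block of CLIP's vision tower.
sae = SparseAutoencoder(d_model=768, n_features=16384)
acts = torch.randn(1, 197, 768)       # [batch, CLS + patch tokens, d_model]
steered = steer(acts, sae, feature_idx=42, scale=0.0)  # suppress feature 42
```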
Thraustochytrids, diverse marine unicellular protists encompassing over 10 recognised genera, are renowned for synthesising polyunsaturated fatty acids (PUFAs), with content and composition varying substantially across genera. While PUFAs are known to be produced via PUFA synthase (PUFA‐S) and/or elongase/desaturase (ELO/DES) pathways, the distinctions in genes involved remain unexplored. This study analysed PUFA biosynthetic genes in 19 thraustochytrid strains across six genera, categorising them into four types. Type I exclusively utilises the ELO/DES pathway, Type II employs both PUFA‐S and complete ELO/DES pathways, while Types III and IV primarily rely on PUFA‐S, with Type III lacking the canonical Δ9 desaturase and Type IV missing most desaturase and elongase enzymes. Notably, the Δ9 desaturase and ATP‐citrate lyase (ACLY) are exclusive to Types I and II, while β‐carotene hydroxylase (CrtZ) is absent in these types. ACLY absence suggests alternative acetyl‐CoA supply pathways in Types III and IV, whereas CrtZ absence implies either a lack of specific xanthophylls or alternative biosynthetic pathways in Types I and II. Synteny analysis revealed conserved genomic organisation of PUFA biosynthetic genes, indicating a shared evolutionary trajectory. This study provides insights into the genetic diversity underlying PUFA biosynthesis in thraustochytrids, while proposing putative evolutionary pathways for the four lineages.
Developing reliable and generalizable deep learning systems for medical imaging faces significant obstacles due to spurious correlations, data imbalances, and limited text annotations in datasets. Addressing these challenges requires architectures robust to the unique complexities posed by medical imaging data. The rapid advancements in vision-language foundation models within the natural image domain prompt the question of how they can be adapted for medical imaging tasks. In this work, we present PRISM, a framework that leverages foundation models to generate high-resolution, language-guided medical image counterfactuals using Stable Diffusion. Our approach demonstrates unprecedented precision in selectively modifying spurious correlations (e.g., medical devices) and disease features, enabling the removal and addition of specific attributes while preserving other image characteristics. Through extensive evaluation, we show how PRISM advances counterfactual generation and enables the development of more robust downstream classifiers for clinically deployable solutions. To facilitate broader adoption and research, we make our code publicly available at https://github.com/Amarkr1/PRISM.
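For readers unfamiliar with language-guided counterfactual editing, the sketch below shows the general pattern using an off-the-shelf Stable Diffusion editing pipeline from the diffusers library. It is not the PRISM pipeline itself (see the repository linked above for that); the checkpoint, file names, prompt, and parameters are placeholders chosen for illustration.

```python
# Illustrative only: generic language-guided image editing with a Stable
# Diffusion (InstructPix2Pix) checkpoint, NOT the PRISM framework's method.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

source = Image.open("chest_xray.png").convert("RGB")  # hypothetical input image

# Counterfactual edit: remove a spurious attribute while keeping the rest
# of the image (anatomy, positioning) as unchanged as possible.
counterfactual = pipe(
    prompt="remove the pacemaker device",
    image=source,
    num_inference_steps=50,
    image_guidance_scale=1.5,   # stay close to the source image
    guidance_scale=7.5,         # follow the text instruction
).images[0]
counterfactual.save("chest_xray_no_device.png")
```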