Publications
UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction
Developing reliable and generalizable deep learning systems for medical imaging faces significant obstacles due to spurious correlations, data imbalances, and limited text annotations in datasets. Addressing these challenges requires architectures robust to the unique complexities posed by medical imaging data. The rapid advancements in vision-language foundation models within the natural image domain prompt the question of how they can be adapted for medical imaging tasks. In this work, we present PRISM, a framework that leverages foundation models to generate high-resolution, language-guided medical image counterfactuals using Stable Diffusion. Our approach demonstrates unprecedented precision in selectively modifying spurious correlations (e.g., medical devices) and disease features, enabling the removal and addition of specific attributes while preserving other image characteristics. Through extensive evaluation, we show how PRISM advances counterfactual generation and enables the development of more robust downstream classifiers for clinically deployable solutions. To facilitate broader adoption and research, we make our code publicly available at https://github.com/Amarkr1/PRISM.
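The repository above contains the authors' full pipeline; purely as a hedged illustration of what language-guided image editing with Stable Diffusion looks like in general, a minimal img2img sketch might read as follows. The model checkpoint, file names, prompt, and strength value are illustrative assumptions, not PRISM itself.

```python
# Hedged sketch: generic language-guided image editing with Stable Diffusion,
# in the spirit of the counterfactual edits described above. This is NOT the
# authors' PRISM pipeline (see https://github.com/Amarkr1/PRISM for that);
# the model name, file paths, prompt, and strength are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

xray = Image.open("chest_xray.png").convert("RGB").resize((512, 512))

# Counterfactual-style edit: remove a spurious attribute (a medical device)
# while keeping the output close to the input image.
counterfactual = pipe(
    prompt="chest x-ray, no pacemaker, no support devices",
    image=xray,
    strength=0.35,        # low strength: stay close to the original image
    guidance_scale=7.5,
).images[0]
counterfactual.save("chest_xray_no_device.png")
```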
A key challenge in AI alignment is guiding large language models (LLMs) to follow desired behaviors at test time. Activation steering, which modifies internal model activations during inference, offers a potential solution. However, prior work in dense activation spaces struggles with superposition, wherein multiple features become entangled, limiting interpretability and precise control. In contrast, sparse representations provide an untapped opportunity for more interpretable behavior modulation. In this work, we introduce sparse activation steering (SAS), a method that leverages sparse autoencoders (SAEs) to steer LLM behavior in sparse spaces. By isolating behavior-specific features through a contrastive prompt-pairing approach, we define a set of features that can selectively reinforce or suppress behaviors. Experiments on Gemma 2 LLMs show that SAS vectors enable nuanced behavioral modulation and finer-grained control. Furthermore, scaling SAEs improves monosemanticity of SAS vectors, suggesting more reliable and interpretable interventions.
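As a sketch of the general mechanism, not the paper's implementation: a steering vector can be built from the difference of SAE feature activations on contrastive prompt pairs, sparsified to the strongest features, and its decoded direction added back into the residual stream. Everything below (the toy SAE, shapes, hyperparameters) is assumed for illustration.

```python
# Minimal sketch of the sparse-activation-steering idea with a toy SAE.
# Shapes, hyperparameters, and the hook mechanics of a real LLM are
# placeholders; this illustrates the technique, not the paper's code.
import torch
import torch.nn as nn

class TinySAE(nn.Module):
    """Toy sparse autoencoder: ReLU encoder, linear decoder."""
    def __init__(self, d_model: int = 512, d_sae: int = 4096):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model, bias=False)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.dec(z)

def sas_vector(sae: TinySAE, acts_pos: torch.Tensor, acts_neg: torch.Tensor,
               top_k: int = 20) -> torch.Tensor:
    """Contrastive prompt pairing: average SAE features over activations
    from prompts that do (pos) / do not (neg) exhibit the behavior, take
    the difference, and keep only the top-k behavior-specific features."""
    diff = sae.encode(acts_pos).mean(0) - sae.encode(acts_neg).mean(0)
    mask = torch.zeros_like(diff)
    mask[diff.abs().topk(top_k).indices] = 1.0
    return diff * mask

def steer(resid: torch.Tensor, sae: TinySAE, v: torch.Tensor,
          alpha: float = 4.0) -> torch.Tensor:
    """Add the decoded steering direction to a residual-stream activation
    (in practice, inside a forward hook at a chosen layer)."""
    return resid + alpha * sae.decode(v)

# Illustrative usage with random stand-ins for real model activations.
sae = TinySAE()
acts_pos, acts_neg = torch.randn(16, 512), torch.randn(16, 512)
v = sas_vector(sae, acts_pos, acts_neg)
steered = steer(torch.randn(1, 512), sae, v)
```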
We introduce the Local Intersectional Visual Spaces (LIVS) dataset, a benchmark for multi-criteria alignment of text-to-image (T2I) models in inclusive urban planning. Developed through a two-year participatory process with 30 community organizations, LIVS encodes diverse spatial preferences across 634 initial concepts, consolidated through 37,710 pairwise comparisons into six core criteria: Accessibility, Safety, Comfort, Invitingness, Inclusivity, and Diversity. Using Direct Preference Optimization (DPO) to fine-tune Stable Diffusion XL, we observed a measurable increase in alignment with community preferences, though a significant proportion of neutral ratings highlights the complexity of modeling intersectional needs. Additionally, as annotation volume increases, accuracy shifts further toward the DPO-tuned model, suggesting that larger-scale preference data enhances fine-tuning effectiveness. LIVS underscores the necessity of integrating context-specific, stakeholder-driven criteria into generative modeling and provides a resource for evaluating AI alignment methodologies across diverse socio-spatial contexts.
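For reference, the core DPO objective over pairwise preferences is compact. The sketch below shows the generic language-model form as a hedged simplification; fine-tuning Stable Diffusion XL uses a diffusion-adapted variant whose noise-level bookkeeping is omitted here.

```python
# Generic DPO preference loss, a simplified stand-in for the
# diffusion-adapted variant used to fine-tune Stable Diffusion XL.
import torch
import torch.nn.functional as F

def dpo_loss(logp_w: torch.Tensor, logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor, ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """logp_w / logp_l: log-likelihoods of the preferred / rejected outputs
    under the model being tuned; ref_logp_*: the same quantities under a
    frozen reference model. beta controls drift from the reference."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Illustrative call with random stand-ins for real log-likelihoods.
loss = dpo_loss(torch.randn(8), torch.randn(8), torch.randn(8), torch.randn(8))
```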
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions—including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples—derived from noisy inputs—with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity “bump” representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
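A rough, non-spiking cartoon of the bump dynamics described here: a rate-based ring network whose activity bump is both advected by noisy angular-velocity samples and pulled toward a noisy head-direction input. All parameters are illustrative assumptions; the paper's model is a spiking network that additionally performs probabilistic sampling.

```python
# Toy rate-based ring attractor integrating noisy angular-velocity samples.
# A cartoon of the bump mechanics only, not the paper's spiking sampler.
import numpy as np

N = 128                                    # neurons around the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
dtheta = 2 * np.pi / N
# Cosine connectivity: local excitation, broad inhibition -> a stable bump.
W = (np.cos(theta[:, None] - theta[None, :]) - 0.5) / N

u = np.cos(theta - np.pi)                  # synaptic drive, bump near pi
hd, dt = np.pi, 0.01                       # true head direction, time step
rng = np.random.default_rng(0)
for _ in range(2000):
    r = np.tanh(np.clip(u, 0.0, None))     # bounded firing rates
    omega = 0.5 + 0.3 * rng.standard_normal()   # noisy velocity sample
    hd = (hd + omega * dt) % (2 * np.pi)
    # Advection ~ -omega * dr/dtheta shifts the bump with each sample.
    drift = omega * (np.roll(r, 1) - np.roll(r, -1)) / (2 * dtheta)
    inp = 0.5 * np.exp(np.cos(theta - hd)) # noisy pull toward observed HD
    u += dt * (-u + 8.0 * (W @ r) + inp + drift)

r = np.tanh(np.clip(u, 0.0, None))
# Population-vector decode of the bump position (head-direction estimate).
decoded = np.angle(np.sum(r * np.exp(1j * theta))) % (2 * np.pi)
print(f"true HD: {hd:.2f} rad, decoded bump position: {decoded:.2f} rad")
```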
Considerations and recommendations from the ISMRM diffusion study group for preclinical diffusion MRI: Part 1: In vivo small-animal imaging
Small-animal diffusion MRI (dMRI) has been used for methodological development and validation, characterizing the biological basis of diffusion phenomena, and comparative anatomy. The steps from animal setup and monitoring, to acquisition, analysis, and interpretation are complex, with many decisions that may ultimately affect what questions can be answered using the resultant data. This work aims to present selected considerations and recommendations from the diffusion community on best practices for preclinical dMRI of in vivo animals. We describe the general considerations and foundational knowledge that must be considered when designing experiments. We briefly describe differences in animal species and disease models and discuss why some may be more or less appropriate for different studies. We then give recommendations for in vivo acquisition protocols, including decisions on hardware, animal preparation, and imaging sequences, followed by advice for data processing including preprocessing, model-fitting, and tractography. Finally, we provide an online resource that lists publicly available preclinical dMRI datasets and software packages to promote responsible and reproducible research. In each section, we attempt to provide guides and recommendations, but also highlight areas for which no guidelines exist (and why), and where future work should focus. Although we mainly cover the central nervous system (on which most preclinical dMRI studies are focused), we also provide, where possible and applicable, recommendations for other organs of interest. An overarching goal is to enhance the rigor and reproducibility of small animal dMRI acquisitions and analyses, and thereby advance biomedical knowledge.
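As one concrete instance of the preprocessing and model-fitting steps these recommendations cover, a minimal pipeline with the open-source DIPY package might look as follows. File names are placeholders, and MP-PCA denoising followed by a DTI fit is only one of the many pipeline choices discussed.

```python
# Minimal dMRI preprocessing + model-fitting example using DIPY.
# File names are placeholders; this is one of many possible pipelines.
from dipy.io.image import load_nifti, save_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from dipy.denoise.localpca import mppca
from dipy.reconst.dti import TensorModel

data, affine = load_nifti("dwi.nii.gz")                  # 4D diffusion data
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")  # gradient scheme
gtab = gradient_table(bvals, bvecs=bvecs)

denoised = mppca(data, patch_radius=2)    # Marchenko-Pastur PCA denoising

tensor_fit = TensorModel(gtab).fit(denoised)    # voxel-wise DTI fit
save_nifti("fa.nii.gz", tensor_fit.fa, affine)  # fractional anisotropy map
```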