AI Perspectives for Policymakers
Co-led by Mila and CIFAR, this program connects policymakers with a group of AI experts for open discussions of their AI and policy challenges.
Join us on April 17 for our annual one-day conference on AI research, featuring Mila researchers and renowned speakers, in support of Centraide of Greater Montreal.
Developing a UN Expert Panel on AI
Mila recently convened leading experts to discuss the creation of an independent AI panel for the UN. This document offers key recommendations for ensuring its independence and legitimacy.
Publications
The Non-Local Model Merging Problem: Permutation Symmetries and Variance Collapse
Model merging aims to efficiently combine the weights of multiple expert models, each trained on a specific task, into a single multi-task model with strong performance across all tasks. When applied to all but the last layer of weights, existing methods -- such as Task Arithmetic, TIES-merging, and TALL mask merging -- work well to combine expert models obtained by fine-tuning a common foundation model, operating within a "local" neighborhood of the foundation model. This work explores the more challenging scenario of "non-local" merging, which we find arises when an expert model changes significantly during pretraining or when the expert models do not even share a common foundation model. We observe that standard merging techniques often fail to generalize effectively in this non-local setting, even when accounting for permutation symmetries using standard techniques. We identify that this failure is, in part, due to "variance collapse", a phenomenon also identified in the setting of linear mode connectivity by Jordan et al. (2023). To address this, we propose a multi-task technique to re-scale and shift the output activations of the merged model for each task, aligning its output statistics with those of the corresponding task-specific expert models. Our experiments demonstrate that this correction significantly improves the performance of various model merging approaches in non-local settings, providing a strong baseline for future research on this problem.
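The per-task affine correction described above can be sketched briefly. The following is a minimal illustration, assuming access to the output activations of each task-specific expert and of the merged model on that task's data; the function names and the per-dimension mean/std matching are assumptions for illustration, not the authors' reference implementation.

```python
# Hypothetical sketch of the per-task output re-scaling and shifting
# described in the abstract. Assumes activations are stacked as
# (num_samples, num_features) arrays; names are illustrative.
import numpy as np

def fit_affine_correction(expert_acts: np.ndarray, merged_acts: np.ndarray):
    """Fit a per-dimension scale/shift so the merged model's output
    statistics match those of the task-specific expert."""
    mu_e, sd_e = expert_acts.mean(axis=0), expert_acts.std(axis=0) + 1e-8
    mu_m, sd_m = merged_acts.mean(axis=0), merged_acts.std(axis=0) + 1e-8
    scale = sd_e / sd_m           # undo the collapsed variance
    shift = mu_e - scale * mu_m   # re-center on the expert's mean
    return scale, shift

def apply_correction(acts: np.ndarray, scale: np.ndarray, shift: np.ndarray):
    """Apply the fitted correction to the merged model's activations."""
    return acts * scale + shift
```

At inference time, one correction per task would be fitted once on held-out activations and then applied to every output of the merged model for that task.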
The intricate structural and functional architecture of the brain enables a wide range of cognitive processes, from perception and action to higher-order abstract thinking. Despite important progress, the relationship between the brain's structural and functional properties is not yet fully established. In particular, the way the brain's anatomy shapes its electrophysiological dynamics remains elusive. The electroencephalography (EEG) activity recorded during naturalistic tasks is thought to exhibit patterns of coupling with the underlying brain structure that vary as a function of behavior, yet these patterns have not been sufficiently quantified. We address this gap by jointly examining individual Diffusion-Weighted Imaging (DWI) scans and continuous EEG recorded during video-watching and resting state, using a Graph Signal Processing (GSP) framework. By decomposing the structural graph into eigenmodes and expressing the EEG activity as an extension of anatomy, GSP provides a way to quantify structure-function coupling. We elucidate how structure shapes function during naturalistic tasks such as movie-watching and how this association is modulated by task, quantifying the coupling relationship in a region-, time-, and frequency-resolved manner. First, our findings indicate that EEG activity in the sensorimotor cortex is strongly coupled with brain structure, while activity in higher-order systems is less constrained by anatomy, i.e., shows more flexibility; in addition, watching videos was associated with stronger structure-function coupling in the sensorimotor cortex compared to resting-state data. Second, time-resolved analysis revealed that unimodal systems undergo minimal temporal fluctuation in structure-function association, while the transmodal system displays the highest temporal fluctuations, with the exception of the PCC, which shows low fluctuations. Lastly, our frequency-resolved analysis revealed a consistent topography across different EEG rhythms, suggesting a similar relationship with the anatomical structure across frequency bands. Together, this unprecedented characterization of the link between structure and function using continuous EEG during naturalistic behavior underscores the role of anatomy in shaping ongoing cognitive processes. By combining the temporal and spectral resolution of EEG with the methodological advantages of GSP, our work sheds new light on the anatomo-functional organization of the brain.
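As a rough illustration of the GSP framework mentioned above, one can build the structural graph's eigenmodes from the DWI-derived connectivity matrix and project EEG activity onto them. The coupling measure below (an energy ratio between smooth and non-smooth eigenmodes) and all variable names are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal GSP sketch: eigenmodes of a structural connectome and a
# simple structure-function coupling score for one EEG sample.
# W: (regions x regions) structural connectivity; x: (regions,) EEG
# activity mapped onto the same regions. Names are illustrative.
import numpy as np

def graph_fourier_basis(W: np.ndarray) -> np.ndarray:
    """Eigenmodes of the symmetric normalized graph Laplacian,
    sorted from smooth (structure-aligned) to non-smooth."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)  # ascending eigenvalue order
    return eigvecs

def structure_function_coupling(x: np.ndarray, U: np.ndarray, k: int) -> float:
    """Ratio of signal energy in the k smoothest eigenmodes (coupled
    with structure) to the energy in the remaining modes (decoupled)."""
    x_hat = U.T @ x                  # graph Fourier transform
    coupled = np.sum(x_hat[:k] ** 2)
    decoupled = np.sum(x_hat[k:] ** 2)
    return coupled / (decoupled + 1e-12)
```

Repeating this score per region, time window, and frequency band would yield the kind of region-, time-, and frequency-resolved coupling maps the abstract describes.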
One of the critical aspects of assistive robotics is to provide control of a high-dimensional robot from a low-dimensional user input (e.g., a 2D joystick). Data-driven teleoperation seeks to provide an intuitive user interface, called an action map, that maps the low-dimensional input to robot velocities learned from human demonstrations. Action maps are machine learning models trained on robotic demonstration data to map user input directly to desired movements, as opposed to aspects of robot pose ("move to cup or pour content" vs. "move along x- or y-axis"). Many works have investigated nonlinear action maps with multi-layer perceptrons, but recent work suggests that local-linear neural approximations provide better control of the system. However, local linear models assume actions exist on a linear subspace and may not capture nuanced motions in the training data. In this work, we hypothesize that local-linear neural networks are effective because they make the action map odd w.r.t. the user input, enhancing the intuitiveness of the controller. Based on this assumption, we propose two nonlinear means of encoding odd behavior that do not constrain the action map to a local linear function. However, our analysis reveals that these models effectively behave like local linear models for relevant mappings between user joysticks and robot movements. We support this claim in simulation and show, in a real-world use case, that non-linear maps offer no statistically significant benefit in terms of the users' experience. These negative results suggest that further investigation into model architectures beyond local linear models may offer diminishing returns for improving user experience in data-driven teleoperation systems.
2024-10-14
IEEE/RSJ International Conference on Intelligent Robots and Systems (published)
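One hedged way to encode the odd-symmetry hypothesis f(-u) = -f(u) from the abstract above is to antisymmetrize an arbitrary network. This sketch is an illustrative construction, not necessarily one of the two encodings the paper proposes; the joystick and action dimensions are assumptions.

```python
# Illustrative odd action map: for any network g, the map
# f(u) = (g(u) - g(-u)) / 2 satisfies f(-u) = -f(u), so reversing
# the joystick direction exactly reverses the commanded velocity.
import torch
import torch.nn as nn

class OddActionMap(nn.Module):
    def __init__(self, input_dim: int = 2, action_dim: int = 7, hidden: int = 64):
        super().__init__()
        # Assumed dimensions: 2D joystick input, 7-DoF velocity output.
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # Antisymmetrization makes the map odd without constraining it
        # to be locally linear.
        return 0.5 * (self.net(u) - self.net(-u))
```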
In human cognition theory, human thinking is governed by two systems: the fast and intuitive System 1 and the slower but more deliberative System 2. Recent studies have shown that incorporating System 2 processes into Transformers, including large language models (LLMs), significantly enhances their reasoning capabilities. Nevertheless, models that purely resemble System 2 thinking require substantially higher computational costs and are much slower to respond. To address this challenge, we present Dualformer, a single Transformer model that seamlessly integrates both the fast and slow reasoning modes. Dualformer is obtained by training on data with randomized reasoning traces, where different parts of the traces are dropped during training. The dropping strategies are specifically tailored to the trace structure, analogous to analyzing our thinking process and creating shortcuts with patterns. At inference time, our model can be configured to output only the solutions (fast mode), both the reasoning chain and the final solution (slow mode), or to automatically decide which mode to engage (auto mode). In all cases, Dualformer outperforms the corresponding baseline models in both performance and computational efficiency: (1) in slow mode, Dualformer optimally solves unseen 30 x 30 maze navigation tasks 97.6% of the time, surpassing the Searchformer baseline (trained on data with complete reasoning traces) at 93.3%, while using 45.5% fewer reasoning steps; (2) in fast mode, Dualformer completes those tasks with an 80% optimal rate, significantly outperforming the Solution-Only model (trained on solution-only data), which has an optimal rate of only 30%. For math problems, our techniques have also achieved improved performance with LLM fine-tuning, demonstrating generalization beyond task-specific models.
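The randomized trace-dropping idea can be sketched as a simple data-augmentation step, assuming each training example is a list of reasoning-trace tokens followed by solution tokens. The mode probabilities and the random-span dropping below are illustrative stand-ins for the paper's structure-tailored strategies.

```python
# Illustrative trace-dropping augmentation: each example is emitted
# with a full, partial, or empty reasoning trace, so a single model
# learns both slow (trace + solution) and fast (solution-only) modes.
import random

def drop_trace(trace: list[str], solution: list[str]) -> list[str]:
    """Return training tokens with a randomly reduced reasoning trace.
    The probabilities and span-dropping scheme are assumptions."""
    r = random.random()
    if r < 0.25:                        # fast mode: drop the whole trace
        kept = []
    elif r < 0.75:                      # partial: drop one random span
        i = random.randrange(len(trace) + 1)
        j = random.randrange(i, len(trace) + 1)
        kept = trace[:i] + trace[j:]
    else:                               # slow mode: keep the full trace
        kept = trace
    return kept + solution
```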
Predicting molecular impact on cellular function is a core challenge in therapeutic design. Phenomic experiments, designed to capture cellular morphology, utilize microscopy-based techniques and offer a high-throughput solution for uncovering molecular impact on the cell. In this work, we learn a joint latent space between molecular structures and microscopy phenomic experiments, aligning paired samples with contrastive learning. Specifically, we study the problem of Contrastive PhenoMolecular Retrieval, which consists of zero-shot molecular structure identification conditioned on phenomic experiments. We assess challenges in multi-modal learning of the phenomics and molecular modalities, such as experimental batch effects, inactive molecule perturbations, and encoding perturbation concentration. We demonstrate improved multi-modal learner retrieval through (1) a uni-modal pre-trained phenomics model, (2) a novel inter-sample similarity-aware loss, and (3) models conditioned on a representation of molecular concentration. Following this recipe, we propose MolPhenix, a molecular phenomics model. MolPhenix leverages a pre-trained phenomics model to demonstrate significant performance gains across perturbation concentrations, molecular scaffolds, and activity thresholds. In particular, we demonstrate an 8.1
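A hedged sketch of the contrastive alignment between the molecular and phenomic modalities follows, using a standard symmetric InfoNCE objective. It omits the paper's inter-sample similarity-aware loss and its concentration conditioning; all names and shapes are assumptions.

```python
# Illustrative CLIP-style contrastive loss for paired (molecule,
# phenomics) embeddings: matched pairs lie on the diagonal of the
# similarity matrix and are pulled together, mismatches pushed apart.
import torch
import torch.nn.functional as F

def contrastive_loss(mol_emb: torch.Tensor,
                     pheno_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired embeddings,
    each of shape (batch, dim)."""
    mol = F.normalize(mol_emb, dim=-1)
    phe = F.normalize(pheno_emb, dim=-1)
    logits = mol @ phe.T / temperature          # (batch, batch) similarities
    targets = torch.arange(len(mol), device=mol.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```

Zero-shot retrieval would then rank candidate molecular embeddings by cosine similarity to a query phenomic embedding in this shared space.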