No matter the size: democratizing protein discovery with AI
Mila researchers have created a powerful open-source protein language model that is more compact and efficient, in order to democratize protein discovery.
The next cohort of our program, designed to give participants a foundational understanding of AI technologies, will take place in Ottawa on November 28-29.
Publications
Model approximation in MDPs with unbounded per-step cost
The design of the proposal distributions, and most notably of the kernel parameters, is crucial for the performance of Markov chain Monte Carlo (MCMC) rendering. A poor selection of parameters can increase the correlation of the Markov chain and result in bad rendering performance. We approach this problem with a novel path perturbation strategy for online learning of state-dependent kernel parameters. We base our approach on the theoretical framework of regional adaptive MCMC, which enables the adaptation of parameters depending on the region of the state space that contains the current sample, and on information collected from previous samples. For this, we define a partitioning of the path space on a low-dimensional canonical space to capture the characteristics of paths, with a focus on path segments closer to the sensor. Fast convergence is achieved by adaptive refinement of the partitions. As examples, we present two novel regional adaptive path perturbation techniques akin to lens and multi-chain perturbations. Our approach can easily be used on top of existing path space MLT methods to improve rendering efficiency, while being agnostic to the initial choice of kernel parameters.
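Illustration only: a minimal sketch of the regional adaptation idea on a 1-D toy target, where each region of a fixed partition keeps its own proposal scale and adapts it from the acceptance statistics of samples that fell in that region. The partition boundaries, target acceptance rate, and Robbins-Monro schedule are assumptions of this sketch; the paper's path-space partitioning and rendering-specific perturbations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prob(x):
    # Toy target: a mixture of two Gaussians with very different scales,
    # so a single global proposal width is a poor fit for both regions.
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * ((x - 3.0) / 0.3) ** 2)

def region(x, boundaries=(0.0,)):
    # Map a state to the index of the partition cell that contains it.
    return int(np.searchsorted(boundaries, x))

def regional_adaptive_metropolis(n_steps=20000, n_regions=2, target_accept=0.44):
    sigmas = np.ones(n_regions)      # per-region proposal std (the kernel parameter)
    counts = np.zeros(n_regions)     # how many samples each region has seen
    x, lp = 0.0, log_prob(0.0)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        r = region(x)
        x_prop = x + sigmas[r] * rng.normal()
        lp_prop = log_prob(x_prop)
        accepted = np.log(rng.uniform()) < lp_prop - lp
        if accepted:
            x, lp = x_prop, lp_prop
        # Robbins-Monro update of the kernel parameter for region r only,
        # steering its empirical acceptance rate toward the target.
        counts[r] += 1
        step = 1.0 / np.sqrt(counts[r])
        sigmas[r] *= np.exp(step * ((1.0 if accepted else 0.0) - target_accept))
        chain[t] = x
    return chain, sigmas

chain, sigmas = regional_adaptive_metropolis()
print("adapted per-region proposal stds:", sigmas)  # narrow mode gets a smaller std
```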
An enhanced wideband tracking method for characteristic modes (CMs) is investigated in this paper. The method consists of three stages, and its core tracking stage (CTS) is based on a classical eigenvector correlation-based algorithm. To decrease the tracking time and eliminate crossing avoidance (CRA), we append a commonly used eigenvalue filter (EF) as the preprocessing stage and a novel postprocessing stage to the CTS. The proposed postprocessing stage can identify all CRA mode pairs by analyzing their trajectory and correlation characteristics. Subsequently, it can predict the corresponding CRA frequencies and correct problematic qualities rapidly. Considering potential variations in eigenvector numbers at consecutive frequency samples caused by the EF, a new execution condition for the adaptive frequency adjustment in the CTS is introduced. Finally, CMs of a conductor plate and a fractal structure are investigated to demonstrate the performance of the proposed method, and the obtained results are discussed. (A sketch of the correlation-based matching step appears after this entry.)
2024-02-12
International Journal of Microwave and Wireless Technologies (published)
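Illustration only: a minimal sketch of the eigenvector correlation step that such correlation-based trackers build on, matching modes at one frequency sample to the next by maximizing pairwise eigenvector correlation. The Hungarian assignment and the random toy data are assumptions of this sketch; the paper's EF preprocessing and CRA postprocessing stages are not reproduced.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correlation_matrix(V_prev, V_next):
    # V_prev, V_next: (n_basis, n_modes) complex eigenvector matrices with
    # unit-norm columns. Entry (i, j) is |<v_i, w_j>|, the modal correlation.
    return np.abs(V_prev.conj().T @ V_next)

def track_modes(V_prev, V_next):
    # One-to-one matching that maximizes total eigenvector correlation.
    corr = correlation_matrix(V_prev, V_next)
    rows, cols = linear_sum_assignment(-corr)
    return cols, corr[rows, cols]  # permutation and matched correlations

# Toy usage: perturb one frequency sample slightly and swap two modes;
# the tracker should recover the swap from the correlations alone.
rng = np.random.default_rng(1)
V = np.linalg.qr(rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4)))[0]
V_next = (V + 0.05 * rng.normal(size=V.shape))[:, [1, 0, 2, 3]]
V_next /= np.linalg.norm(V_next, axis=0)
perm, scores = track_modes(V, V_next)
print(perm)  # expected: [1 0 2 3], recovering the swapped mode order
```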
The federated learning paradigm has motivated the development of methods for aggregating multiple client updates into a global server model, without sharing client data. Many federated learning algorithms, including the canonical Federated Averaging (FedAvg), take a direct (possibly weighted) average of the client parameter updates, motivated by results in distributed optimization. In this work, we adopt a function space perspective and propose a new algorithm, FedFish, that aggregates local approximations to the functions learned by clients, using an estimate based on their Fisher information. We evaluate FedFish on realistic, large-scale cross-device benchmarks. While the performance of FedAvg can suffer as client models drift further apart, we demonstrate that FedFish is more robust to longer local training. Our evaluation across several settings in image and language benchmarks shows that FedFish outperforms FedAvg as local training epochs increase. Further, FedFish results in global networks that are more amenable to efficient personalization via local fine-tuning on the same or shifted data distributions. For instance, federated pretraining on the C4 dataset, followed by few-shot personalization on Stack Overflow, results in a 7% improvement in next-token prediction by FedFish over FedAvg.
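Illustration only: a minimal parameter-space sketch of Fisher-weighted aggregation in the spirit of FedFish, assuming each client reports its parameters together with a diagonal empirical Fisher estimate. The diagonal approximation, the eps regularizer, and the toy shapes are assumptions here; the paper's function-space formulation is richer than this.

```python
import numpy as np

def fisher_weighted_average(params, fishers, eps=1e-8):
    """params, fishers: lists of dicts mapping layer name -> ndarray,
    one entry per client; fishers hold diagonal Fisher estimates."""
    agg = {}
    for name in params[0]:
        num = sum(f[name] * p[name] for p, f in zip(params, fishers))
        den = sum(f[name] for f in fishers) + eps
        agg[name] = num / den  # per-parameter Fisher-weighted mean
    return agg

# Toy usage: two clients, one layer; client 1's Fisher is twice as large,
# so its parameters get twice the weight coordinate-wise.
p1 = {"w": np.array([1.0, 0.0])}
p2 = {"w": np.array([0.0, 1.0])}
f1 = {"w": np.array([2.0, 2.0])}
f2 = {"w": np.array([1.0, 1.0])}
print(fisher_weighted_average([p1, p2], [f1, f2])["w"])  # -> [2/3, 1/3]
```

The intuition is that coordinates where a client's Fisher is large are those its local function is most sensitive to, so they dominate the average instead of being washed out by a uniform mean.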
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science. In this paper, we propose Iterated Denoising Energy Matching (iDEM), an iterative algorithm that uses a novel stochastic score matching objective leveraging solely the energy function and its gradient -- and no data samples -- to train a diffusion-based sampler. Specifically, iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our stochastic matching objective to further improve the sampler. iDEM is scalable to high dimensions, as the inner matching objective is simulation-free and requires no MCMC samples. Moreover, by leveraging the fast mode-mixing behavior of diffusion, iDEM smooths out the energy landscape, enabling efficient exploration and learning of an amortized sampler. We evaluate iDEM on a suite of tasks ranging from standard synthetic energy functions to invariant …
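Illustration only: a rough sketch of iDEM's simulation-free inner objective on a toy double-well energy, assuming a geometric variance-exploding noise schedule. The K-sample Monte Carlo estimator of the noised target's score follows the high-level description above; the energy, network, schedule, and the stand-in batch are assumptions of this sketch (in the full algorithm, inputs come from a buffer refreshed by simulating the learned reverse SDE, the outer loop).

```python
import torch

def energy(x):
    # Toy double-well energy, batched over leading dimensions.
    return ((x ** 2 - 1.0) ** 2).sum(dim=-1)

def sigma(t):
    # Assumed geometric variance-exploding noise schedule.
    return 0.01 * (100.0 ** t)

def mc_score_target(x_t, t, K=128):
    # Estimate grad_x log E_{x0 ~ N(x_t, sigma(t)^2 I)}[exp(-energy(x0))]
    # with K Gaussian samples: the score of the noised Boltzmann target,
    # obtained from the energy alone -- no data samples, no MCMC.
    x_t = x_t.detach().requires_grad_(True)
    x0 = x_t.unsqueeze(0) + sigma(t) * torch.randn(K, *x_t.shape)
    log_z = torch.logsumexp(-energy(x0), dim=0)  # MC estimate up to a constant
    (grad,) = torch.autograd.grad(log_z.sum(), x_t)
    return grad.detach()

# Inner loop: regress a small score network onto the estimator; this step is
# simulation-free. A fixed stand-in batch replaces the buffer for brevity.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.SiLU(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(256, 2)
for step in range(200):
    t = torch.rand(x.shape[0], 1)
    x_t = x + sigma(t) * torch.randn_like(x)
    loss = ((net(torch.cat([x_t, t], dim=-1)) - mc_score_target(x_t, t)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```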
Common self-improvement approaches for large language models (LLMs), such as STaR (Zelikman et al., 2022), iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large amounts of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this shortcoming, we propose V-STaR, which utilizes both the correct and incorrect solutions generated during the self-improvement process to train, using DPO, a verifier that judges the correctness of model-generated solutions. This verifier is used at inference time to select one solution among many candidate solutions. Running V-STaR for multiple iterations results in progressively better reasoners and verifiers, delivering a 4% to 17% test accuracy improvement over existing self-improvement and verification approaches on common code generation and math reasoning benchmarks with LLaMA2 models.
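Illustration only: a minimal sketch of the two ingredients described above, written against hypothetical generator, verifier_score, and is_correct callables: building DPO preference pairs from correct and incorrect self-generated solutions, and best-of-n selection with the trained verifier at inference time. All names here are assumed for illustration; the actual DPO training of the verifier is not shown.

```python
from typing import Callable, List, Tuple

def make_dpo_pairs(problem: str, solutions: List[str],
                   is_correct: Callable[[str, str], bool]) -> List[Tuple[str, str, str]]:
    # Pair every correct solution (chosen) with every incorrect one (rejected),
    # so incorrect generations become training signal instead of being discarded.
    correct = [s for s in solutions if is_correct(problem, s)]
    incorrect = [s for s in solutions if not is_correct(problem, s)]
    return [(problem, chosen, rejected) for chosen in correct for rejected in incorrect]

def best_of_n(problem: str, generator: Callable[[str], str],
              verifier_score: Callable[[str, str], float], n: int = 16) -> str:
    # Sample n candidate solutions and return the one the trained verifier
    # judges most likely to be correct.
    candidates = [generator(problem) for _ in range(n)]
    return max(candidates, key=lambda s: verifier_score(problem, s))
```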