Publications
Unmixing Optical Signals from Undersampled Volumetric Measurements by Filtering the Pixel Latent Variables
The development of signal unmixing algorithms is essential for leveraging multimodal datasets acquired through a wide array of scientific imaging technologies, including hyperspectral or time-resolved acquisitions. In experimental physics, enhancing the spatio-temporal resolution or expanding the number of detection channels often leads to diminished sampling rate and signal-to-noise ratio (SNR), significantly affecting the efficacy of signal unmixing algorithms. We propose Latent Unmixing, a new approach which applies band-pass filters to the latent space of a multi-dimensional convolutional neural network to disentangle overlapping signal components. It enables better isolation and quantification of individual signal contributions, especially in the context of undersampled distributions. Using multi-dimensional convolution kernels to process all dimensions simultaneously enhances the network's ability to extract information from adjacent pixels and time or spectral bins. This approach enables more effective separation of components in cases where individual pixels do not provide clear, well-resolved information. We showcase the method's practical use in experimental physics through two test cases that highlight the versatility of our approach: fluorescence lifetime microscopy and mode decomposition in optical fibers. The latent unmixing method extracts valuable information from complex signals that cannot be resolved by standard methods. It opens new possibilities in optics and photonics for multichannel separations at an increased sampling rate.
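As a rough illustration of the idea of filtering pixel latent variables, the sketch below builds a small 3D convolutional encoder, applies a band-pass filter along the time/spectral axis of the latent volume, and decodes each band-limited latent into one component map. The layer sizes, filter bands, and overall architecture are illustrative assumptions, not the authors' exact network.

```python
# Minimal sketch: 3D conv encoder -> band-pass filter on the latent volume -> per-component decoding.
import torch
import torch.nn as nn
import torch.fft


def bandpass_along_last_axis(z, low, high):
    """Zero out frequency bins outside [low, high) along the last axis (rFFT domain)."""
    Z = torch.fft.rfft(z, dim=-1)
    mask = torch.zeros(Z.shape[-1], device=z.device)
    mask[low:high] = 1.0
    return torch.fft.irfft(Z * mask, n=z.shape[-1], dim=-1)


class LatentUnmixingSketch(nn.Module):
    def __init__(self, n_components=2, latent_ch=16):
        super().__init__()
        # 3D convolutions mix information across x, y and the time/spectral axis.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, latent_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(latent_ch, latent_ch, kernel_size=3, padding=1), nn.ReLU(),
        )
        # One (low, high) frequency band per component -- assumed, tunable values.
        self.bands = [(0, 4), (4, 16)]
        self.heads = nn.ModuleList(
            [nn.Conv3d(latent_ch, 1, kernel_size=1) for _ in range(n_components)]
        )

    def forward(self, x):  # x: (batch, 1, H, W, T)
        z = self.encoder(x)
        filtered = [bandpass_along_last_axis(z, lo, hi) for lo, hi in self.bands]
        # Each band-limited latent is decoded into one component map.
        return torch.cat([head(f) for head, f in zip(self.heads, filtered)], dim=1)


if __name__ == "__main__":
    model = LatentUnmixingSketch()
    out = model(torch.randn(2, 1, 32, 32, 64))
    print(out.shape)  # torch.Size([2, 2, 32, 32, 64])
```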
In the realm of antibody therapeutics development, increasing the binding affinity of an antibody to its target antigen is a crucial task. This paper presents GearBind, a pretrainable deep neural network designed to be effective for in silico affinity maturation. Leveraging multi-level geometric message passing alongside contrastive pretraining on protein structural data, GearBind capably models the complex interplay of atom-level interactions within protein complexes, surpassing previous state-of-the-art approaches on SKEMPI v2 in terms of Pearson correlation, mean absolute error (MAE) and root mean square error (RMSE). In silico experiments elucidate that pretraining helps GearBind become sensitive to mutation-induced binding affinity changes and reflective of amino acid substitution tendency. Using an ensemble model based on pretrained GearBind, we successfully optimize the affinity of CR3022 to the spike (S) protein of the SARS-CoV-2 Omicron strain. Our strategy yields a high success rate with up to 17-fold affinity increase. GearBind proves to be an effective tool in narrowing the search space for in vitro antibody affinity maturation, underscoring the utility of geometric deep learning and adept pre-training in macromolecule interaction modeling.
Finetuning language models with reinforcement learning (RL), e.g. from human feedback (HF), is a prominent method for alignment. But optimizing against a reward model can improve on reward while degrading performance in other areas, a phenomenon known as reward hacking, alignment tax, or language drift. First, we argue that commonly used test metrics are insufficient and instead measure how different algorithms trade off between reward and drift. The standard method modifies the reward with a Kullback-Leibler (KL) penalty between the online and initial model. We propose Elastic Reset, a new algorithm that achieves higher reward with less drift without explicitly modifying the training objective. We periodically reset the online model to an exponentially moving average (EMA) of itself, then reset the EMA model to the initial model. Through the use of an EMA, our model recovers quickly after resets and achieves higher reward with less drift in the same number of steps. We demonstrate that fine-tuning language models with Elastic Reset leads to state-of-the-art performance on a small-scale pivot-translation benchmark, outperforms all baselines in a medium-scale RLHF-like IMDB mock sentiment task, and leads to a more performant and more aligned technical QA chatbot with LLaMA-7B. Code available at github.com/mnoukhov/elastic-reset.
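The reset schedule named in the abstract can be sketched in a few lines: keep an EMA of the online model, periodically copy the EMA into the online model, then reset the EMA to the initial (pre-finetuning) weights. The training loop, decay rate, and reset interval below are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of the Elastic Reset schedule (not the authors' implementation).
import copy
import torch


@torch.no_grad()
def ema_update(ema_model, online_model, decay=0.99):
    # Exponential moving average of the online parameters.
    for p_ema, p in zip(ema_model.parameters(), online_model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)


@torch.no_grad()
def copy_params(dst_model, src_model):
    for p_dst, p_src in zip(dst_model.parameters(), src_model.parameters()):
        p_dst.copy_(p_src)


def train_with_elastic_reset(online, make_loss, optimizer, steps, reset_every=1000):
    initial = copy.deepcopy(online)   # frozen reference to the initial model
    ema = copy.deepcopy(online)       # running EMA of the online model
    for step in range(1, steps + 1):
        optimizer.zero_grad()
        make_loss(online).backward()  # e.g. an RL(HF) objective on the online model
        optimizer.step()
        ema_update(ema, online)
        if step % reset_every == 0:
            copy_params(online, ema)  # 1) reset the online model to its EMA
            copy_params(ema, initial) # 2) reset the EMA model to the initial model
    return online
```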
Parameter-efficient transfer learning (PETL) methods have emerged as a solid alternative to the standard full fine-tuning approach. They only train a few extra parameters for each downstream task, without sacrificing performance, and dispense with the issue of storing a copy of the pre-trained model for each task. For audio classification tasks, the Audio Spectrogram Transformer (AST) model shows impressive results. However, surprisingly, how to efficiently adapt it to several downstream tasks has not been tackled before. In this paper, we bridge this gap and present a detailed investigation of common PETL methods for the adaptation of the AST model to audio/speech tasks. Furthermore, we propose a new adapter design that exploits the convolution module of the Conformer model, leading to superior performance over the standard PETL approaches and surpassing or achieving performance parity with full fine-tuning by updating only 0.29% of the parameters. Finally, we provide ablation studies revealing that our proposed adapter: 1) proves to be effective in few-shot efficient transfer learning, 2) attains optimal results regardless of the number of allocated parameters, and 3) can be applied to other pre-trained models. Our code is available at https://github.com/umbertocappellazzo/PETL_AST.
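To make the adapter idea concrete, here is a hedged sketch of a convolution-based adapter in the spirit of the Conformer convolution module: a small bottleneck with a depthwise 1-D convolution over the time axis, added back to the frozen transformer features through a residual connection. The dimensions, kernel size, and placement are illustrative assumptions rather than the paper's exact design (see the repository above for the real one).

```python
# Sketch of a Conformer-style convolutional adapter inserted into a frozen backbone.
import torch
import torch.nn as nn


class ConvAdapterSketch(nn.Module):
    def __init__(self, dim, bottleneck=32, kernel_size=31):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.down = nn.Linear(dim, bottleneck)           # down-projection
        self.dwconv = nn.Conv1d(bottleneck, bottleneck, kernel_size,
                                padding=kernel_size // 2, groups=bottleneck)
        self.act = nn.SiLU()                             # Swish activation
        self.up = nn.Linear(bottleneck, dim)              # up-projection

    def forward(self, x):                                 # x: (batch, seq, dim)
        y = self.down(self.norm(x))
        y = self.dwconv(y.transpose(1, 2)).transpose(1, 2)  # depthwise conv over time
        y = self.up(self.act(y))
        return x + y                                       # residual connection


# Only the adapter parameters are trained; the pre-trained AST backbone stays frozen.
adapter = ConvAdapterSketch(dim=768)
print(adapter(torch.randn(2, 100, 768)).shape)  # torch.Size([2, 100, 768])
```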
Climate models, such as Earth system models (ESMs), are crucial for simulating future climate change based on projected Shared Socioeconomic Pathways (SSP) greenhouse gas emissions scenarios. While ESMs are sophisticated and invaluable, machine learning-based emulators trained on existing simulation data can project additional climate scenarios much faster and are computationally efficient. However, they often lack generalizability and interpretability. This work delves into the potential of causal representation learning, specifically the Causal Discovery with Single-parent Decoding (CDSD) method, which could render climate model emulation efficient and interpretable. We evaluate CDSD on multiple climate datasets, focusing on emissions, temperature, and precipitation. Our findings shed light on the challenges, limitations, and promise of using CDSD as a stepping stone towards more interpretable and robust climate model emulation.
To deal with notorious delays in communication systems, it is crucial to forecast key system characteristics, such as the communication load. Most existing studies aggregate data from multiple edge nodes to improve forecasting accuracy. However, the bandwidth cost of such data aggregation could be unacceptably high from the perspective of system operators. To achieve both high forecasting accuracy and bandwidth efficiency, this paper proposes an Adaptive Multi-Teacher Weighting in Teacher-Student Learning approach, namely AdaTeacher, for communication load forecasting of multiple edge nodes. Each edge node trains a local model on its own data. A target node collects multiple models from its neighbor nodes and treats these models as teachers. Then, the target node trains a student model from teachers via Teacher-Student (T-S) learning. Unlike most existing T-S learning approaches that treat teachers evenly, resulting in limited performance, AdaTeacher introduces a bilevel optimization algorithm to dynamically learn an importance weight for each teacher toward a more effective and accurate T-S learning process. Compared to the state-of-the-art methods, AdaTeacher not only reduces the bandwidth cost by 53.85%, but also improves the load forecasting accuracy by 21.56% and 24.24% on two real-world datasets.
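A minimal sketch of the weighted multi-teacher objective follows: the student forecaster imitates several frozen teacher models, each scaled by an importance weight. The paper learns these weights with a bilevel optimization; here, as a simplification, they are parameterized by a softmax over learnable logits trained jointly with the student. Function names and the MSE losses are assumptions for illustration.

```python
# Hedged sketch of weighted multi-teacher distillation for load forecasting.
import torch
import torch.nn as nn
import torch.nn.functional as F


def weighted_distillation_loss(student, teachers, teacher_logits, x, y):
    """Combine the supervised forecasting loss with per-teacher imitation losses."""
    weights = torch.softmax(teacher_logits, dim=0)   # one importance weight per teacher
    pred = student(x)
    loss = F.mse_loss(pred, y)                       # ground-truth forecasting loss
    for w, teacher in zip(weights, teachers):
        with torch.no_grad():
            target = teacher(x)                      # frozen teacher forecast
        loss = loss + w * F.mse_loss(pred, target)   # weighted imitation term
    return loss


# Toy usage: a linear student, two frozen linear teachers, learnable teacher logits.
student = nn.Linear(24, 1)
teachers = [nn.Linear(24, 1), nn.Linear(24, 1)]
teacher_logits = nn.Parameter(torch.zeros(len(teachers)))
x, y = torch.randn(8, 24), torch.randn(8, 1)
print(weighted_distillation_loss(student, teachers, teacher_logits, x, y))
```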
With the increasing use of data-intensive mobile applications and the growing number of mobile users, the demand for wireless data services has been increasing exponentially in recent years. To address this demand, a large number of new cellular base stations are being deployed around the world, leading to a significant increase in energy consumption and greenhouse gas emissions. Consequently, energy consumption has emerged as a key concern in the fifth-generation (5G) network era and beyond. Reinforcement learning (RL), which aims to learn a control policy via interacting with the environment, has been shown to be effective in addressing network optimization problems. However, reinforcement learning, especially deep reinforcement learning, requires a large number of interactions with the environment, which often limits its applicability in the real world. In this work, to better deal with dynamic traffic scenarios and improve real-world applicability, we propose a transfer deep reinforcement learning framework for energy optimization in cellular communication networks. Specifically, we first pre-train a set of RL-based energy-saving policies on source base stations and then transfer the most suitable policy to the given target base station in an unsupervised learning manner. Experimental results demonstrate that base station energy consumption can be reduced significantly using this approach.
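The transfer step can be pictured with a simple matching rule: pre-trained energy-saving policies from source base stations are compared to the target base station by its traffic statistics, and the closest source policy is reused. The nearest-neighbour matching on traffic profiles below is purely an illustrative assumption; the abstract only states that the selection is done in an unsupervised manner.

```python
# Hedged sketch: pick the source policy whose base-station traffic profile is
# closest (Euclidean distance) to the target base station's profile.
import numpy as np


def select_source_policy(target_traffic, source_traffics, source_policies):
    distances = [np.linalg.norm(np.asarray(target_traffic) - np.asarray(s))
                 for s in source_traffics]
    return source_policies[int(np.argmin(distances))]


# Toy example with three source policies, represented here by identifiers.
policies = ["policy_A", "policy_B", "policy_C"]
source_profiles = [[0.2, 0.5, 0.9], [0.1, 0.1, 0.3], [0.7, 0.8, 0.6]]
chosen = select_source_policy([0.15, 0.2, 0.35], source_profiles, policies)
print(chosen)  # policy_B
```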
Dynamic link prediction is a crucial task in the study of evolving graphs, which serve as abstract models for various real-world applications. Recent dynamic graph representation learning models have claimed near-perfect performance in this task. However, we argue that the standard evaluation strategy for dynamic link prediction overlooks the sparsity and recurrence patterns inherent in dynamic networks. Specifically, the current strategy suffers from issues such as evaluating models on a balanced set of positive and negative edges, neglecting the reassessment of frequently recurring positive edges, and lacking a comprehensive evaluation of both recurring and new edges. To address these limitations, we propose a novel evaluation strategy called EXHAUSTIVE, which takes into account all relevant negative edges and separately assesses the performance on recurring and new edges. Using our proposed evaluation strategy, we compare the performance of five state-of-the-art dynamic graph learning models on seven benchmark datasets. Compared to the previous common evaluation strategy, we observe an average drop of 62% in Average Precision for dynamic link prediction. Additionally, the ranking of the models also changes under the new evaluation setting. Furthermore, we demonstrate that while all models perform considerably worse when predicting new edges compared to recurring ones, the best-performing models differ between the two scenarios. This highlights the importance of employing the proposed evaluation strategy for both the assessment and design of dynamic link prediction models. By adopting our novel evaluation strategy, researchers can obtain a more accurate understanding of model performance in dynamic link prediction, leading to improved evaluation and design of such models.
2023-12-04
2023 IEEE International Conference on Data Mining Workshops (ICDMW) (published)
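The core of the evaluation strategy described above can be sketched briefly: score every candidate edge, then report Average Precision separately for recurring positives (edges already seen during training) and new positives, ranking each group against all relevant negatives rather than a balanced sample. The scoring function, edge representation, and negative-edge set below are placeholders, not the benchmark's actual interface.

```python
# Hedged sketch of evaluating recurring vs. new edges against all relevant negatives.
from sklearn.metrics import average_precision_score


def evaluate_recurring_vs_new(score_fn, pos_edges, neg_edges, seen_edges):
    """score_fn maps an edge (u, v) to a link probability; seen_edges is the set
    of edges observed during training."""
    neg_scores = [score_fn(e) for e in neg_edges]          # all relevant negatives
    results = {}
    for name, positives in (
        ("recurring", [e for e in pos_edges if e in seen_edges]),
        ("new",       [e for e in pos_edges if e not in seen_edges]),
    ):
        if not positives:
            continue
        scores = [score_fn(e) for e in positives] + neg_scores
        labels = [1] * len(positives) + [0] * len(neg_scores)
        results[name] = average_precision_score(labels, scores)
    return results
```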