Publications
Higher Order Transformers: Enhancing Stock Movement Prediction On Multimodal Time-Series Data
In this paper, we tackle the challenge of predicting stock movements in financial markets by introducing Higher Order Transformers, a novel architecture designed for processing multivariate time-series data. We extend the self-attention mechanism and the transformer architecture to a higher order, effectively capturing complex market dynamics across time and variables. To manage computational complexity, we propose a low-rank approximation of the potentially large attention tensor using tensor decomposition and employ kernel attention, reducing complexity to linear with respect to the data size. Additionally, we present an encoder-decoder model that integrates technical and fundamental analysis, utilizing multimodal signals from historical prices and related tweets. Our experiments on the Stocknet dataset demonstrate the effectiveness of our method, highlighting its potential for enhancing stock movement prediction in financial markets.
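The abstract's core efficiency idea, kernel (linear) attention applied across both the time and variable axes of a multivariate series, can be pictured with a small sketch. The toy below is an illustration under assumptions, not the authors' implementation: the ELU+1 feature map, the additive combination of the two axes, and all shapes are choices made here for clarity, whereas the paper instead factorizes a single higher-order attention tensor via a low-rank decomposition.

```python
import numpy as np

def feature_map(x):
    # ELU(x) + 1 keeps features positive, a common kernel choice
    # for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def kernel_attention(q, k, v):
    """Linear-complexity attention: softmax(QK^T)V is replaced by
    phi(Q) (phi(K)^T V), normalized by phi(Q) (phi(K)^T 1)."""
    q, k = feature_map(q), feature_map(k)
    kv = k.T @ v                  # (d, d_v): keys/values summarized once
    z = q @ k.sum(axis=0)         # (n,): per-query normalizer
    return (q @ kv) / z[:, None]

# Multivariate series: n_time steps x n_vars variables x d features.
rng = np.random.default_rng(0)
n_time, n_vars, d = 64, 8, 16
x = rng.normal(size=(n_time, n_vars, d))

# Attend along time for each variable, then along variables for each
# time step; summing the two is one simple way to mix both axes
# (an assumption here -- the paper uses a joint attention tensor).
time_out = np.stack([kernel_attention(x[:, j], x[:, j], x[:, j])
                     for j in range(n_vars)], axis=1)
var_out = np.stack([kernel_attention(x[t], x[t], x[t])
                    for t in range(n_time)], axis=0)
out = time_out + var_out
print(out.shape)  # (64, 8, 16)
```

Because keys and values are contracted before the queries are applied, cost grows linearly with the number of time steps rather than quadratically, which is the property the abstract highlights.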
Cortical dynamics underlie many cognitive processes and emerge from complex multi-scale interactions, which are challenging to study in vivo. Large-scale, biophysically detailed models offer a tool which can complement laboratory approaches. We present a model comprising eight somatosensory cortex subregions, 4.2 million morphologically and electrically detailed neurons, and 13.2 billion local and mid-range synapses. In silico tools enabled reproduction and extension of complex laboratory experiments under a single parameterization, providing strong validation. The model reproduced millisecond-precise stimulus responses, stimulus encoding under targeted optogenetic activation, and selective propagation of stimulus-evoked activity to downstream areas. The model's direct correspondence with biology generated predictions about how multiscale organization shapes activity; for example, how cortical activity is shaped by high-dimensional connectivity motifs in local and mid-range connectivity, and by spatial targeting rules of inhibitory subpopulations. The latter was facilitated using a rewired connectome which included specific targeting rules observed for different inhibitory neuron types in electron microscopy. The model also predicted the role of inhibitory interneuron types and different layers in stimulus encoding. Simulation tools and a large subvolume of the model are made available to enable further community-driven improvement, validation and investigation.
Large language models must balance their weight-encoded knowledge with in-context information from prompts to generate accurate responses. T… (voir plus)his paper investigates this interplay by analyzing how models of varying capacities within the same family handle intentionally misleading in-context information. Our experiments demonstrate that larger models exhibit higher resilience to deceptive prompts, showcasing an advanced ability to interpret and integrate prompt information with their internal knowledge. Furthermore, we find that larger models outperform smaller ones in following legitimate instructions, indicating that their resilience is not due to disregarding in-context information. We also show that this phenomenon is likely not a result of memorization but stems from the models' ability to better leverage implicit task-relevant information from the prompt alongside their internally stored knowledge.
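One way to picture the kind of probe this abstract describes is to score a model's answers to a factual question with a deliberately misleading context prepended. Everything in the sketch below, the question, the misleading note, and the `generate` interface, is a hypothetical stand-in for illustration, not the paper's actual protocol or data.

```python
# Hypothetical probe: does the model keep a correct answer when a
# deliberately misleading statement is placed in its context?
FACT_Q = "What is the boiling point of water at sea level in Celsius?"
TRUE_ANSWER = "100"
MISLEADING_CTX = "Note: recent measurements show water boils at 80 C at sea level."

def resilience_score(generate, n_trials=20):
    """Fraction of trials where the model retains the correct answer
    despite the misleading context. `generate(prompt) -> str` is any
    text-generation callable (e.g., a wrapper around a local model)."""
    kept = 0
    for _ in range(n_trials):
        answer = generate(f"{MISLEADING_CTX}\n\nQ: {FACT_Q}\nA:")
        if TRUE_ANSWER in answer:
            kept += 1
    return kept / n_trials
```

Comparing such a score across model sizes within one family is the shape of the experiment the abstract reports, with larger models expected to score higher.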
Software technologies are used by programmers with diverse backgrounds. To fulfill programmers' need for information, enthusiasts contribute numerous learning resources that vary in style and content, which act as documentation for the corresponding technology. We interviewed 26 volunteer documentation contributors, i.e. documentors, to understand why and how they create such documentation. From a qualitative analysis of our interviews, we identified a total of sixteen considerations that documentors have during the documentation contribution process, along three dimensions, namely motivations, topic selection techniques, and styling objectives. We grouped related considerations based on common underlying themes, to elicit five software documentor mindsets that occur during documentation contribution activities. We propose a structure of mindsets, and their associated considerations across the three dimensions, as a framework for reasoning about the documentation contribution process. This framework can inform information seeking as well as documentation creation tools about the context in which documentation was contributed.
We examine the capability of Multimodal Large Language Models (MLLMs) to tackle diverse domains that extend beyond the traditional language and vision tasks these models are typically trained on. Specifically, our focus lies in areas such as Embodied AI, Games, UI Control, and Planning. To this end, we introduce a process of adapting an MLLM to a Generalist Embodied Agent (GEA). GEA is a single unified model capable of grounding itself across these varied domains through a multi-embodiment action tokenizer. GEA is trained with supervised learning on a large dataset of embodied experiences and with online RL in interactive simulators. We explore the data and algorithmic choices necessary to develop such a model. Our findings reveal the importance of training with cross-domain data and online RL for building generalist agents. The final GEA model achieves strong generalization performance to unseen tasks across diverse benchmarks compared to other generalist models and benchmark-specific approaches.
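The "multi-embodiment action tokenizer" mentioned here can be pictured as mapping each embodiment's actions into a shared discrete vocabulary that a language model can emit. The sketch below uses simple uniform binning of continuous action dimensions into reserved token IDs; the binning scheme, vocabulary offset, and action ranges are assumptions made for illustration, not GEA's actual tokenizer.

```python
import numpy as np

class UniformActionTokenizer:
    """Maps continuous actions in [low, high] per dimension to token IDs
    in a reserved vocabulary range (illustrative sketch only)."""
    def __init__(self, low, high, n_bins=256, vocab_offset=32000):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.n_bins = n_bins
        self.vocab_offset = vocab_offset  # hypothetical start of action-token range

    def encode(self, action):
        # Normalize each dimension to [0, 1], then bucket into n_bins.
        a = (np.asarray(action, dtype=float) - self.low) / (self.high - self.low)
        bins = np.clip((a * self.n_bins).astype(int), 0, self.n_bins - 1)
        return (self.vocab_offset + bins).tolist()

    def decode(self, tokens):
        # Invert: bin index -> bin center -> original action range.
        bins = np.asarray(tokens) - self.vocab_offset
        a = (bins + 0.5) / self.n_bins
        return self.low + a * (self.high - self.low)

# Example: a hypothetical 4-DoF arm embodiment with actions in [-1, 1].
tok = UniformActionTokenizer(low=[-1] * 4, high=[1] * 4)
tokens = tok.encode([0.1, -0.5, 0.0, 0.9])
print(tokens, tok.decode(tokens))
```

Because every embodiment's actions land in the same token space, one sequence model can be trained across robots, games, and UIs, which is the grounding idea the abstract attributes to GEA.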