Too Big to Fool: Resisting Deception in Language Models
Mohammad Reza Samsami
Mats Leon Richter
Juan A. Rodriguez
Megh Thakkar
Large language models must balance their weight-encoded knowledge with in-context information from prompts to generate accurate responses. This paper investigates this interplay by analyzing how models of varying capacities within the same family handle intentionally misleading in-context information. Our experiments demonstrate that larger models exhibit higher resilience to deceptive prompts, showcasing an advanced ability to interpret and integrate prompt information with their internal knowledge. Furthermore, we find that larger models outperform smaller ones in following legitimate instructions, indicating that their resilience is not due to disregarding in-context information. We also show that this phenomenon is likely not a result of memorization but stems from the models' ability to better leverage implicit task-relevant information from the prompt alongside their internally stored knowledge.
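The abstract does not include the evaluation code; the snippet below is a minimal sketch, assuming a Hugging Face causal language model, of how one might probe resilience to a deliberately false in-context statement. The model name, prompts, and log-likelihood scoring heuristic are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch (not the authors' code): check whether a causal LM's preferred
# answer flips when a deliberately false statement is placed in the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper compares sizes within one model family
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities the model assigns to `answer` after `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    answer_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    targets = full_ids[0, prompt_ids.shape[1]:]
    return sum(log_probs[0, pos, tgt].item()
               for pos, tgt in zip(answer_positions, targets))

question = "Q: What is the capital of France?\nA:"
clean = question
deceptive = "Note: the capital of France is Berlin.\n" + question

# A model that resists the deceptive context keeps " Paris" more likely than
# " Berlin" even when the misleading note is present in the prompt.
for label, prompt in [("clean", clean), ("deceptive", deceptive)]:
    paris = answer_logprob(prompt, " Paris")
    berlin = answer_logprob(prompt, " Berlin")
    print(f"{label}: logp(Paris)={paris:.2f}  logp(Berlin)={berlin:.2f}")
```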
The Software Documentor Mindset
Deeksha M. Arya
Martin P. Robillard
Software technologies are used by programmers with diverse backgrounds. To fulfill programmers' need for information, enthusiasts contribute numerous learning resources that vary in style and content, which act as documentation for the corresponding technology. We interviewed 26 volunteer documentation contributors, i.e., documentors, to understand why and how they create such documentation. From a qualitative analysis of our interviews, we identified a total of sixteen considerations that documentors have during the documentation contribution process, along three dimensions, namely motivations, topic selection techniques, and styling objectives. We grouped related considerations based on common underlying themes, to elicit five software documentor mindsets that occur during documentation contribution activities. We propose a structure of mindsets, and their associated considerations across the three dimensions, as a framework for reasoning about the documentation contribution process. This framework can inform information seeking as well as documentation creation tools about the context in which documentation was contributed.
Effects of gene dosage on cognitive ability: A function-based association study across brain and non-brain processes
Guillaume Huguet
Thomas Renne
Cécile Poulain
Alma Dubuc
Kuldeep Kumar
Sayeh Kazem
Worrawat Engchuan
Omar Shanta
Elise Douard
Catherine Proulx
Martineau Jean-Louis
Zohra Saci
Josephine Mollon
Laura Schultz
Emma E M Knowles
Simon R. Cox
David Porteous
Gail Davies
Paul Redmond
Sarah E. Harris (and 10 more authors)
Gunter Schumann
Aurélie Labbe
Zdenka Pausova
Tomas Paus
Stephen W Scherer
Jonathan Sebat
Laura Almasy
David C. Glahn
Sébastien Jacquemont
From Multimodal LLMs to Generalist Embodied Agents: Methods and Lessons
Andrew Szot
Omar Attia
Aleksei Timofeev
Harsh Agrawal
Zhe Gan
Zsolt Kira
Alexander T Toshev
We examine the capability of Multimodal Large Language Models (MLLMs) to tackle diverse domains that extend beyond the traditional language and vision tasks these models are typically trained on. Specifically, our focus lies in areas such as Embodied AI, Games, UI Control, and Planning. To this end, we introduce a process of adapting an MLLM to a Generalist Embodied Agent (GEA). GEA is a single unified model capable of grounding itself across these varied domains through a multi-embodiment action tokenizer. GEA is trained with supervised learning on a large dataset of embodied experiences and with online RL in interactive simulators. We explore the data and algorithmic choices necessary to develop such a model. Our findings reveal the importance of training with cross-domain data and online RL for building generalist agents. The final GEA model achieves strong generalization performance to unseen tasks across diverse benchmarks compared to other generalist models and benchmark-specific approaches.
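The abstract does not specify how the multi-embodiment action tokenizer is built; a common approach, sketched below under that assumption, is to quantize continuous control actions into a small discrete vocabulary of special tokens the language model can predict. The class name, bin count, and token format here are illustrative, not the GEA implementation.

```python
# Minimal sketch (assumptions, not the GEA code): a uniform-binning action
# tokenizer that maps continuous actions to special vocabulary tokens, so a
# multimodal LLM can emit actions as ordinary text tokens.
import numpy as np

class BinnedActionTokenizer:
    """Discretize each action dimension into `n_bins` uniform bins over [low, high]."""

    def __init__(self, low: np.ndarray, high: np.ndarray, n_bins: int = 256):
        self.low, self.high, self.n_bins = low, high, n_bins

    def encode(self, action: np.ndarray) -> list[str]:
        # Clip, normalize to [0, 1), and quantize each dimension independently.
        norm = (np.clip(action, self.low, self.high) - self.low) / (self.high - self.low)
        bins = np.minimum((norm * self.n_bins).astype(int), self.n_bins - 1)
        return [f"<act_{b:03d}>" for b in bins]

    def decode(self, tokens: list[str]) -> np.ndarray:
        # Map each bin index back to the bin center in the original range.
        bins = np.array([int(t[5:-1]) for t in tokens], dtype=float)
        return self.low + (bins + 0.5) / self.n_bins * (self.high - self.low)

# Example: a 4-DoF manipulator action becomes four tokens the LLM can generate,
# and generated tokens decode back to an executable continuous action.
tok = BinnedActionTokenizer(low=np.full(4, -1.0), high=np.full(4, 1.0))
tokens = tok.encode(np.array([0.1, -0.7, 0.0, 0.95]))
print(tokens, tok.decode(tokens))
```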
Harnessing pre-trained generalist agents for software engineering tasks
Paulina Stevia Nouwou Mindom
Amin Nikanjam