Ayla Rigouts Terryn

Associate Academic Member
Assistant Professor, Université de Montréal, Linguistics and Translation
Research Topics
Information Retrieval
Large Language Models (LLM)
Linguistic Evaluation of Language Models
Machine Translation
Natural Language Processing
Terminology

Biography

Ayla Rigouts Terryn is an Assistant Professor of Translation Technologies and AI in the Department of Linguistics and Translation at Université de Montréal and an Associate Academic Member at Mila. She is also an IVADO professor (Cluster 3: Natural Language Processing (NLP)) and holds the IVADO-FRQ chair "At the Crossroads of Languages and AI: Towards a Synergy Between Language Expertise and Computational Innovation".

She obtained a Master’s degree in translation from Antwerp University, completed her PhD on automatic terminology extraction at Ghent University, and specialised in multilingual language technology as a senior researcher at KU Leuven. As a linguist in the field of AI, she is passionate about advancing our understanding of language models through linguistic insight. She explores NLP in multilingual and non-English contexts (including low-resource scenarios), nuanced analysis and evaluation of large language models, and translation technology for domain-specific texts and terminology.

Publications

LLMs and Cultural Values: the Impact of Prompt Language and Explicit Cultural Framing
Bram Bulté
Large Language Models (LLMs) are rapidly being adopted by users across the globe, who interact with them in a diverse range of languages. At the same time, there are well-documented imbalances in the training data and optimisation objectives of this technology, raising doubts as to whether LLMs can represent the cultural diversity of their broad user base. In this study, we look at LLMs and cultural values and examine how prompt language and cultural framing influence model responses and their alignment with human values in different countries. We probe 10 LLMs with 63 items from the Hofstede Values Survey Module and World Values Survey, translated into 11 languages, and formulated as prompts with and without different explicit cultural perspectives. Our study confirms that both prompt language and cultural perspective produce variation in LLM outputs, but with an important caveat: While targeted prompting can, to a certain extent, steer LLM responses in the direction of the predominant values of the corresponding countries, it does not overcome the models' systematic bias toward the values associated with a restricted set of countries in our dataset: the Netherlands, Germany, the US, and Japan. All tested models, regardless of their origin, exhibit remarkably similar patterns: They produce fairly neutral responses on most topics, with selective progressive stances on issues such as social tolerance. Alignment with cultural values of human respondents is improved more with an explicit cultural perspective than with a targeted prompt language. Unexpectedly, combining both approaches is no more effective than cultural framing with an English prompt. These findings reveal that LLMs occupy an uncomfortable middle ground: They are responsive enough to changes in prompts to produce variation, but too firmly anchored to specific cultural defaults to adequately represent cultural diversity.
Introduction to the special issue on Computational Terminology
Patrick Drouin
Proceedings of the 18th Workshop on Building and Using Comparable Corpora (BUCC)
Serge Sharoff
Pierre Zweigenbaum
Reinhard Rapp
THInC: A Theory-Driven Framework for Computational Humor Detection
Victor De Marez
Thomas Winters
Humor is a fundamental aspect of human communication and cognition, as it plays a crucial role in social engagement. Although theories about humor have evolved over centuries, there is still no agreement on a single, comprehensive humor theory. Likewise, computationally recognizing humor remains a significant challenge despite recent advances in large language models. Moreover, most computational approaches to detecting humor are not based on existing humor theories. This paper contributes to bridging this long-standing gap between humor theory research and computational humor detection by creating an interpretable framework for humor classification, grounded in multiple humor theories, called THInC (Theory-driven Humor Interpretation and Classification). THInC ensembles interpretable GA2M classifiers, each representing a different humor theory. We engineered a transparent flow to actively create proxy features that quantitatively reflect different aspects of theories. An implementation of this framework achieves an F1 score of 0.85. The associative interpretability of the framework enables analysis of proxy efficacy, alignment of joke features with theories, and identification of globally contributing features. This paper marks a pioneering effort in creating a humor detection framework that is informed by diverse humor theories and offers a foundation for future advancements in theory-driven humor classification. It also serves as a first step in automatically comparing humor theories in a quantitative manner.
Exploratory Study on the Impact of English Bias of Generative Large Language Models in Dutch and French
Miryam de Lhoneux
The most widely used LLMs like GPT4 and Llama 2 are trained on large amounts of data, mostly in English, but are still able to deal with non-English languages. This English bias leads to lower performance in other languages, especially low-resource ones. This paper studies the linguistic quality of LLMs in two non-English high-resource languages: Dutch and French, with a focus on the influence of English. We first construct a comparable corpus of text generated by humans versus LLMs (GPT-4, Zephyr, and GEITje) in the news domain. We proceed to annotate linguistic issues in the LLM-generated texts, obtaining high inter-annotator agreement, and analyse these annotated issues. We find a substantial influence of English for all models under all conditions: on average, 16% of all annotations of linguistic errors or peculiarities had a clear link to English. Fine-tuning an LLM to a target language (GEITje is fine-tuned on Dutch) reduces the number of linguistic issues and probably also the influence of English. We further find that using a more elaborate prompt leads to linguistically better results than a concise prompt. Finally, increasing the temperature for one of the models leads to lower linguistic quality but does not alter the influence of English.