
Jean-François Godbout

Associate Academic Member
Full Professor, Université de Montréal
Research Topics
Misinformation
Generative Models
AI Safety

Biography

Jean-François Godbout is a professor in the Department of Political Science at Université de Montréal and an associate academic member of Mila - Quebec AI Institute. His research focuses primarily on computational social science, AI safety, and the impact of generative AI on society. He is currently director of the undergraduate program in data analysis in the social sciences at Université de Montréal and a researcher at IVADO.

Current Students

Postdoctorate - UdeM
Research Master's - UdeM
Co-supervisor:
Research Master's - UdeM
Co-supervisor:

Publications

Regional and Temporal Patterns of Partisan Polarization during the COVID-19 Pandemic in the United States and Canada
Zachary Yang
Anne Imouza
Maximilian Puelma Touzel
Cécile Amadoro
Gabrielle Desrosiers-Brisebois
Kellin Pelrine
Sacha Lévy
Public health measures were among the most polarizing topics debated online during the COVID-19 pandemic. Much of the discussion surrounded specific events, such as when and which particular interventions came into practice. In this work, we develop and apply an approach to measure subnational and event-driven variation in partisan polarization and explore how these dynamics varied both across and within countries. We apply our measure to a dataset of over 50 million tweets posted during late 2020, a salient period of polarizing discourse in the early phase of the pandemic. In particular, we examine regional variations in both the United States and Canada, focusing on three specific health interventions: lockdowns, masks, and vaccines. We find that more politically conservative regions had higher levels of partisan polarization in both countries, especially in the US, where a strong negative correlation exists between regional vaccination rates and the degree of polarization in vaccine-related discussions. We then analyze the timing, context, and profile of spikes in polarization, linking them to specific events discussed on social media across different regions in both countries. These spikes typically last only a few days, suggesting that online discussions reflect, and could even drive, changes in public opinion, which in the context of pandemic response affects public health outcomes across regions and over time.
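As a toy illustration of the kind of region-by-time measurement this abstract describes, the sketch below scores polarization from stance-labeled posts. The column names, the stance scale, and the gap-between-partisan-means metric are all assumptions for illustration, not the paper's actual measure.

```python
# A toy sketch of one way to quantify regional, time-resolved partisan
# polarization from stance-scored posts. Column names, stance scale, and
# the metric itself are illustrative assumptions, not the paper's measure.
import pandas as pd

def regional_polarization(df: pd.DataFrame, freq: str = "D") -> pd.DataFrame:
    """Score polarization per region and time bucket.

    Expects columns: 'region', 'timestamp', 'party' ('L' or 'R'),
    and 'stance' (a score in [-1, 1] toward a health intervention).
    Polarization is taken here as the gap between the two partisan
    groups' mean stances: the wider the gap, the more polarized.
    """
    df = df.assign(bucket=pd.to_datetime(df["timestamp"]).dt.to_period(freq))
    means = (df.groupby(["region", "bucket", "party"])["stance"]
               .mean()
               .unstack("party"))
    gap = (means["R"] - means["L"]).abs()
    return gap.rename("polarization").reset_index()
```

Spikes in the resulting time series could then be matched against dated events, in the spirit of the event-driven analysis the abstract mentions.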
Political Dynasties in Canada
Alex B. Rivard
Marc André Bodet
Using a unique dataset of legislators' electoral and biographical data in the Canadian provinces of Ontario, Quebec, New Brunswick, Nova Scotia and the federal parliament, this article analyses the extent to which family dynasties affected the career development of legislators since the mid-18th century. We find that the prevalence of dynasties was higher in provincial legislatures than it was in the federal parliament, that the number of dynasties in the Senate increased until the mid-20th century, and that the proportion of dynastic legislators at the subnational level was similar to the numbers seen in the United Kingdom during the early 19th century. Our results confirm the existence of a clear career benefit in terms of cabinet and senate appointments. In contrast to the American case and in line with the United Kingdom experience, we find no causal relationship between a legislator's tenure length and the presence of a dynasty.
A Comprehensive Dataset of Four Provincial Legislative Assembly Members
Alex B. Rivard
Marc André Bodet
Éric Montigny
This research note reports on a new dataset about legislators in four Canadian provinces since the establishment of their colonial assemblies in the eighteenth century. Over 7,000 legislators from Ontario, Quebec, New Brunswick, and Nova Scotia are included, with consolidated information drawn from multiple sources about parliamentarians' years of birth and death, religion, electoral performance, kinship, and several other biographical indicators. We also illustrate the utility of such data with the help of a few descriptive examples drawn from the four provinces. We believe this consolidated dataset offers several opportunities for future research on representation, legislative activities and party politics.
Combining Confidence Elicitation and Sample-based Methods for Uncertainty Quantification in Misinformation Mitigation
Mauricio Rivera
Kellin Pelrine
Large Language Models have emerged as prime candidates to tackle misinformation mitigation. However, existing approaches struggle with hallucinations and overconfident predictions. We propose an uncertainty quantification framework that leverages both direct confidence elicitation and sample-based consistency methods to provide better calibration for NLP misinformation mitigation solutions. We first investigate the calibration of sample-based consistency methods that exploit distinct features of consistency across sample sizes and stochastic levels. Next, we evaluate the performance and distributional shift of a robust numeric verbalization prompt across single- vs. two-step confidence elicitation procedures. We also compare the performance of the same prompt with different versions of GPT and different numerical scales. Finally, we combine the sample-based consistency and verbalized methods to propose a hybrid framework that yields better uncertainty estimation for GPT models. Overall, our work proposes novel uncertainty quantification methods that will improve the reliability of Large Language Models in misinformation mitigation applications.
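The hybrid idea can be sketched as follows, assuming a generic chat-completion wrapper (`query_llm`, a hypothetical placeholder) and a simple convex combination of the two signals; the paper's actual calibration procedure is more involved than this mix.

```python
# A minimal sketch of the hybrid framework described above: combine a
# verbalized confidence score with agreement across stochastic samples.
# `query_llm` is a hypothetical placeholder for any chat-completion call;
# the convex weighting is an illustrative assumption, not the paper's
# calibrated combination.
from collections import Counter

def query_llm(prompt: str, temperature: float = 1.0) -> dict:
    """Placeholder: returns {'label': 'true'|'false', 'confidence': float in [0, 100]}."""
    raise NotImplementedError

def hybrid_confidence(claim: str, n_samples: int = 10, w: float = 0.5) -> tuple[str, float]:
    # 1) Direct elicitation: ask once for a label plus a verbalized 0-100 score.
    direct = query_llm(
        f"Is this claim true or false? Give a 0-100 confidence.\n{claim}",
        temperature=0.0,
    )
    # 2) Sample-based consistency: resample at higher temperature, measure agreement.
    samples = [
        query_llm(f"Is this claim true or false?\n{claim}", temperature=1.0)["label"]
        for _ in range(n_samples)
    ]
    label, count = Counter(samples).most_common(1)[0]
    consistency = count / n_samples
    # 3) Combine the two signals (a simple convex mix as one possible scheme).
    combined = w * (direct["confidence"] / 100) + (1 - w) * consistency
    return label, combined
```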
Comparing GPT-4 and Open-Source Language Models in Misinformation Mitigation
Tyler Vergho
Kellin Pelrine
Recent large language models (LLMs) have been shown to be effective for misinformation detection. However, the choice of LLMs for experiments varies widely, leading to uncertain conclusions. In particular, GPT-4 is known to be strong in this domain, but it is closed source, potentially expensive, and can show instability between different versions. Meanwhile, alternative LLMs have given mixed results. In this work, we show that Zephyr-7b presents a consistently viable alternative, overcoming key limitations of commonly used approaches like Llama-2 and GPT-3.5. This provides the research community with a solid open-source option and shows open-source models are gradually catching up on this task. We then highlight how GPT-3.5 exhibits unstable performance, such that this very widely used model could provide misleading results in misinformation detection. Finally, we validate new tools including approaches to structured output and the latest version of GPT-4 (Turbo), showing they do not compromise performance, thus unlocking them for future research and potentially enabling more complex pipelines for misinformation mitigation.
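A comparison of this kind could be run with a harness along these lines; the prompt wording, the `ask_model` wrapper, and the choice of F1 as the metric are illustrative assumptions, not the paper's exact protocol.

```python
# A hedged sketch of a model-comparison harness: the same zero-shot veracity
# prompt is sent to each model over a labeled claim set and scored per model.
# `ask_model` is a hypothetical provider-agnostic wrapper (GPT, Zephyr, ...).
from sklearn.metrics import f1_score

PROMPT = "Answer with exactly one word, 'true' or 'false'. Claim: {claim}"

def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder for a provider-specific completion call."""
    raise NotImplementedError

def compare_models(models: list[str], claims: list[str], labels: list[int]) -> dict[str, float]:
    scores = {}
    for name in models:
        preds = [
            1 if ask_model(name, PROMPT.format(claim=c)).strip().lower() == "true" else 0
            for c in claims
        ]
        scores[name] = f1_score(labels, preds)  # one F1 per model, same data for all
    return scores
```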
Uncertainty Resolution in Misinformation Detection
Yury Orlovskiy
Camille Thibault
Anne Imouza
Kellin Pelrine
An Evaluation of Language Models for Hyperpartisan Ideology Detection in Persian Twitter
Sahar Omidi Shayegan
Isar Nejadgholi
Kellin Pelrine
Hao Yu
Sacha Lévy
Zachary Yang
Large Language Models (LLMs) have shown significant promise in various tasks, including identifying the political beliefs of English-speaking social media users from their posts. However, assessing LLMs for this task in non-English languages remains unexplored. In this work, we ask to what extent LLMs can predict the political ideologies of users in Persian social media. To answer this question, we first acknowledge that political parties are not well defined among Persian users and therefore simplify the problem to the more tractable task of hyperpartisan ideology detection. We create a new benchmark and show the potential and limitations of both open-source and commercial LLMs in classifying the hyperpartisan ideologies of users. We compare these models with smaller fine-tuned models, trained both on Persian-language data (ParsBERT) and on translated data (RoBERTa), showing that the latter considerably outperform generative LLMs on this task. We further demonstrate that the performance of the generative LLMs degrades when classifying users based on their tweets instead of their bios, and even when tweets are added as additional information, whereas the smaller fine-tuned models are robust and achieve similar performance across all classes. This study is a first step toward political ideology detection on Persian Twitter, with implications for future research on the dynamics of ideologies in Persian social media.
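The fine-tuned-baseline side of such a comparison might look like the sketch below, using the Hugging Face transformers API; the checkpoint ID and binary label set are assumptions for illustration, and the classification head would still need fine-tuning on the benchmark before its predictions mean anything.

```python
# A minimal sketch of a ParsBERT-style sequence classifier over user bios or
# tweets. The checkpoint ID and binary label set are assumptions; the actual
# benchmark and training setup are described in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "HooshvareLab/bert-fa-base-uncased"  # assumed ParsBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# NOTE: the classification head is freshly initialized here; it must be
# fine-tuned on labeled hyperpartisan data before use.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)
model.eval()

def predict_hyperpartisan(texts: list[str]) -> list[int]:
    """Return 0/1 hyperpartisan-ideology labels for a batch of Persian texts."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()
```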
Quantifying learning-style adaptation in effectiveness of LLM teaching
Ruben Weijers
Gabrielle Fidelis de Castilho
Kellin Pelrine
This preliminary study aims to investigate whether AI, when prompted based on individual learning styles, can effectively improve comprehension and learning experiences in educational settings. It involves tailoring LLMs' baseline prompts and comparing the results of a control group receiving standard content with those of an experimental group receiving learning-style-tailored content. Preliminary results suggest that GPT-4 can generate responses aligned with various learning styles, indicating the potential for enhanced engagement and comprehension. However, these results also reveal challenges, including the model's tendency toward sycophantic behavior and variability in responses. Our findings suggest that a more sophisticated prompt engineering approach is required for integrating AI into education (AIEd) to improve educational outcomes.
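The prompt-tailoring setup described here can be illustrated with a minimal sketch; the style categories and instruction wording below are invented for illustration and are not the study's exact prompts.

```python
# A toy sketch of the control vs. experimental prompt construction: the same
# question is wrapped in style-specific instructions for the experimental
# group and left plain for the control group. Categories and wording are
# illustrative assumptions, not the study's actual prompts.
STYLE_INSTRUCTIONS = {
    "visual": "Explain using diagrams described in words and spatial analogies.",
    "verbal": "Explain in precise prose with definitions and worked text examples.",
    "kinesthetic": "Explain through a hands-on activity the learner could try.",
}

def build_prompt(question: str, learning_style: str | None = None) -> str:
    """Control group: plain question. Experimental group: style-tailored prompt."""
    if learning_style is None:
        return question
    return f"{STYLE_INSTRUCTIONS[learning_style]}\n\nQuestion: {question}"
```

Comparing comprehension outcomes between the two prompt conditions is then a standard control-vs-treatment design, as the abstract describes.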