Jean-François Godbout

Associate Academic Member
Full Professor, Université de Montréal, Department of Political Science
Alumni
Research Topics
AI Safety
Disinformation
Generative Models

Biography

Jean-François Godbout is a Full Professor in the Department of Political Science at the Université de Montréal and an Associate Academic Member at Mila - Quebec Artificial Intelligence Institute. His research focuses primarily on computational social science, AI safety, and the impact of generative AI on society. He is currently Director of the undergraduate program in Data Analysis in Social Sciences and Humanities at the Université de Montréal and a researcher at IVADO.

Current Students

Master's Research - Université de Montréal
Research Intern - Université de Montréal
Co-supervisor:
PhD - McGill University
Co-supervisor:
Postdoctorate - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal
Co-supervisor:
Master's Research - Université de Montréal
Co-supervisor:

Publications

An Evaluation of Language Models for Hyperpartisan Ideology Detection in Persian Twitter
Large Language Models (LLMs) have shown significant promise in various tasks, including identifying the political beliefs of English-speaking social media users from their posts. However, assessing LLMs for this task in non-English languages remains unexplored. In this work, we ask to what extent LLMs can predict the political ideologies of users in Persian social media. Because political parties are not well defined among Persian users, we simplify the problem to the task of hyperpartisan ideology detection. We create a new benchmark and show the potential and limitations of both open-source and commercial LLMs in classifying the hyperpartisan ideologies of users. We compare these models with smaller fine-tuned models, both in Persian (ParsBERT) and on translated data (RoBERTa), and show that the fine-tuned models considerably outperform generative LLMs on this task. We further demonstrate that the performance of generative LLMs degrades when users are classified from their tweets rather than their bios, even when tweets are added as additional information, whereas the smaller fine-tuned models are robust and achieve similar performance across all classes. This study is a first step toward political ideology detection in Persian Twitter, with implications for future research on the dynamics of ideologies in Persian social media.
Quantifying learning-style adaptation in effectiveness of LLM teaching
Ruben Weijers
Gabrielle Fidelis de Castilho
This preliminary study investigates whether AI, when prompted based on individual learning styles, can effectively improve comprehension and learning experiences in educational settings. It involves tailoring LLM baseline prompts and comparing the results of a control group receiving standard content with those of an experimental group receiving learning-style-tailored content. Preliminary results suggest that GPT-4 can generate responses aligned with various learning styles, indicating the potential for enhanced engagement and comprehension. However, these results also reveal challenges, including the model's tendency toward sycophantic behavior and variability in its responses. Our findings suggest that a more sophisticated prompt-engineering approach is required for integrating AI into education (AIEd) to improve educational outcomes.
Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4
Meilina Reksoprodjo
Caleb Gupta
Joel Christoph
Published online: 24 May 2023
Open, Closed, or Small Language Models for Text Classification?
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks. However, many questions remain, including whether open-source models can match closed ones, why these models excel at or struggle with certain tasks, and which practical procedures can improve performance. We address these questions in the context of classification by evaluating three classes of models on eight datasets across three distinct tasks: named entity recognition, political party prediction, and misinformation detection. While larger LLMs often lead to better performance, open-source models can rival their closed-source counterparts after fine-tuning. Moreover, supervised smaller models, such as RoBERTa, can achieve similar or even greater performance than generative LLMs on many datasets. On the other hand, closed models maintain an advantage on hard tasks that demand the most generalizability. This study underscores the importance of model selection based on task requirements.
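The paper's central point, that the best model depends on the task, can be illustrated with a small evaluation harness. The sketch below is not the paper's code; the model names, labels, and predictions are hypothetical, and it simply picks the highest macro-F1 model for each task.

```python
from collections import defaultdict

def accuracy(gold, pred):
    """Fraction of examples where the prediction matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores."""
    labels = set(gold) | set(pred)
    f1s = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def best_model_per_task(results):
    """results maps (task, model) -> (gold, pred);
    returns the highest macro-F1 model for each task."""
    scores = defaultdict(dict)
    for (task, model), (gold, pred) in results.items():
        scores[task][model] = macro_f1(gold, pred)
    return {task: max(models, key=models.get) for task, models in scores.items()}

# Hypothetical predictions for two tasks and two models.
results = {
    ("party_prediction", "roberta-ft"): (["D", "R", "D", "R"], ["D", "R", "D", "R"]),
    ("party_prediction", "gpt-4"):      (["D", "R", "D", "R"], ["D", "R", "R", "R"]),
    ("misinfo", "roberta-ft"):          (["T", "F", "F", "T"], ["T", "T", "F", "T"]),
    ("misinfo", "gpt-4"):               (["T", "F", "F", "T"], ["T", "F", "F", "T"]),
}
print(best_model_per_task(results))
```

On these toy inputs, the fine-tuned small model wins one task and the generative model the other, which is the kind of per-task selection the study argues for.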
Party Prediction for Twitter
Sacha Lévy
Gabrielle Desrosiers-Brisebois
Cécile Amadoro
André Blais
A large number of studies on social media compare the behaviour of users from different political parties. As a basic step, they employ a predictive model to infer users' political affiliation. The accuracy of this model can significantly change the conclusions of a downstream analysis, yet the choice between different models seems to be made arbitrarily. In this paper, we provide a comprehensive survey and an empirical comparison of current party-prediction practices and propose several new approaches that are competitive with or outperform state-of-the-art methods while requiring fewer computational resources. Party-prediction models rely on the content users generate (e.g., tweet texts), the relations they have (e.g., who they follow), or their activities and interactions (e.g., which tweets they like). We examine all of these and compare their signal strength for the party-prediction task. This paper lets practitioners select from a wide range of data types that all give strong performance. Finally, we conduct extensive experiments on different aspects of these methods, such as data collection speed and transfer capabilities, which can provide further insights for both applied and methodological research.
Online Partisan Polarization of COVID-19
Sacha Lévy
Gabrielle Desrosiers-Brisebois
André Blais
In today's age of (mis)information, many people use various social media platforms in an attempt to shape public opinion on several important issues, including elections and the COVID-19 pandemic. These two topics have recently become intertwined, given the importance of complying with public health measures related to COVID-19 and politicians' management of the pandemic. Motivated by this, we study the partisan polarization of COVID-19 discussions on social media. We propose and apply a novel measure of partisan polarization to analyze more than 380 million posts from Twitter and Parler around the 2020 US presidential election. We find a strong correlation between peaks in polarization and polarizing events, such as the January 6th Capitol Hill riot. We further classify each post into the key COVID-19 issues of lockdowns, masks, and vaccines, as well as miscellaneous, to investigate both the volume and polarization of these topics and how they vary over time. Parler includes more negative discussions around lockdowns and masks, as expected, but not around vaccines. We also observe more balanced discussions on Twitter and a general disconnect between the discussions on Parler and Twitter.
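The abstract does not spell out the polarization measure itself. Purely as an illustration, a minimal topic-level score might compare the mean stances of the two partisan groups, so that a larger gap means more polarization. Everything below (the stance values, party labels, and topics) is hypothetical and not the paper's actual measure.

```python
from collections import defaultdict

def topic_polarization(posts):
    """posts: list of (topic, party, stance) tuples, with stance in [-1, 1].
    Returns, for each topic, the gap between the highest and lowest
    per-party mean stance -- a simple illustrative polarization score."""
    by_topic = defaultdict(lambda: defaultdict(list))
    for topic, party, stance in posts:
        by_topic[topic][party].append(stance)
    scores = {}
    for topic, by_party in by_topic.items():
        means = [sum(v) / len(v) for v in by_party.values()]
        # With only one party present, there is no gap to measure.
        scores[topic] = max(means) - min(means) if len(means) > 1 else 0.0
    return scores

# Hypothetical posts: masks divide the parties sharply, vaccines less so.
posts = [
    ("masks", "D", 0.8), ("masks", "D", 0.6),
    ("masks", "R", -0.7), ("masks", "R", -0.5),
    ("vaccines", "D", 0.4), ("vaccines", "R", 0.2),
]
print(topic_polarization(posts))
```

Computing such a score per day rather than per topic would surface the peaks around polarizing events that the study reports.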