
Jean-François Godbout

Associate Academic Member
Full Professor, Université de Montréal
Research Topics
AI Safety
Disinformation
Generative Models

Biography

Jean-François Godbout is a professor in the Department of Political Science at the Université de Montréal and an Associate Academic Member at Mila - Quebec Artificial Intelligence Institute. His research focuses primarily on computational social science, AI safety, and the impact of generative AI on society. He is currently Director of the undergraduate program in data analysis for the social sciences and humanities at the Université de Montréal and a researcher at IVADO.

Current Students

Postdoctorate - Université de Montréal
Master's Research - Université de Montréal
Co-supervisor:
Master's Research - Université de Montréal
Co-supervisor:

Publications

Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4
Kellin Pelrine
Anne Imouza
Meilina Reksoprodjo
Camille Thibault
Caleb Gupta
Joel Christoph
Misinformation poses a critical societal challenge, and current approaches have yet to produce an effective solution. We propose focusing on generalization, uncertainty, and how to leverage recent large language models, in order to create more practical tools to evaluate information veracity in contexts where perfect classification is impossible. We first demonstrate that GPT-4 can outperform prior methods in multiple settings and languages. Next, we explore generalization, revealing that GPT-4 and RoBERTa-large exhibit differences in failure modes. Third, we propose techniques to handle uncertainty that can detect impossible examples and strongly improve outcomes. We also discuss results on other language models, temperature, prompting, versioning, explainability, and web retrieval, each one providing practical insights and directions for future research. Finally, we publish the LIAR-New dataset with novel paired English and French misinformation data and Possibility labels that indicate if there is sufficient context for veracity evaluation. Overall, this research lays the groundwork for future tools that can drive real-world progress to combat misinformation.
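The abstract describes using GPT-4 to evaluate information veracity and handling uncertainty by flagging examples that cannot be resolved. The sketch below illustrates that general kind of pipeline only; the prompt wording, the check_claim helper, and the UNCERTAIN abstention rule are assumptions for illustration, not the paper's actual prompts or uncertainty techniques.

```python
# Illustrative sketch only: a generic LLM-based veracity check with an
# "abstain" option, not the authors' exact method.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Rate the veracity of the following statement as TRUE, FALSE, or "
    "UNCERTAIN if there is not enough context to decide.\n\nStatement: {claim}"
)

def check_claim(claim: str, model: str = "gpt-4") -> str:
    """Return TRUE, FALSE, or UNCERTAIN for a single claim."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic decoding; the paper also studies temperature
        messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
    )
    answer = response.choices[0].message.content.strip().upper()
    # Treat anything that is not a clear TRUE/FALSE as an abstention.
    return answer if answer in {"TRUE", "FALSE"} else "UNCERTAIN"

if __name__ == "__main__":
    print(check_claim("The Eiffel Tower is located in Berlin."))
```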
Party Prediction for Twitter
Kellin Pelrine
Anne Imouza
Zachary Yang
Jacob-Junqi Tian
Sacha Lévy
Gabrielle Desrosiers-Brisebois
Aarash Feizi
Cécile Amadoro
André Blais
Open, Closed, or Small Language Models for Text Classification?
Hao Yu
Zachary Yang
Kellin Pelrine
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks. But many questions remain, including whether open-source models match closed ones, why these models excel or struggle with certain tasks, and what types of practical procedures can improve performance. We address these questions in the context of classification by evaluating three classes of models using eight datasets across three distinct tasks: named entity recognition, political party prediction, and misinformation detection. While larger LLMs often lead to improved performance, open-source models can rival their closed-source counterparts by fine-tuning. Moreover, supervised smaller models, like RoBERTa, can achieve similar or even greater performance in many datasets compared to generative LLMs. On the other hand, closed models maintain an advantage in hard tasks that demand the most generalizability. This study underscores the importance of model selection based on task requirements.
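For context, the supervised smaller models the study refers to (e.g., RoBERTa) are typically fine-tuned for text classification with the Hugging Face Transformers library. The sketch below is a generic illustration under assumed settings; the dataset, label count, and hyperparameters are placeholders, not the paper's experimental setup.

```python
# Illustrative sketch: fine-tuning a supervised RoBERTa classifier, the kind of
# smaller model the study compares against generative LLMs.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "roberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Any labeled dataset with "text" and "label" columns works here; IMDB is a stand-in.
dataset = load_dataset("imdb")

def tokenize(batch):
    # Truncate/pad posts to a fixed length so they can be batched.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="roberta-classifier",
        per_device_train_batch_size=8,
        num_train_epochs=3,
    ),
    train_dataset=tokenized["train"],
)
trainer.train()
```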
Online Partisan Polarization of COVID-19
Zachary Yang
Anne Imouza
Kellin Pelrine
Sacha Lévy
Jiewen Liu
Gabrielle Desrosiers-Brisebois
André Blais
In today’s age of (mis)information, many people utilize various social media platforms in an attempt to shape public opinion on several important issues, including elections and the COVID-19 pandemic. These two topics have recently become intertwined given the importance of complying with public health measures related to COVID-19 and politicians’ management of the pandemic. Motivated by this, we study the partisan polarization of COVID-19 discussions on social media. We propose and utilize a novel measure of partisan polarization to analyze more than 380 million posts from Twitter and Parler around the 2020 US presidential election. We find a strong correlation between peaks in polarization and polarizing events, such as the January 6th Capitol Hill riot. We further classify each post into key COVID-19 issues of lockdown, masks, vaccines, as well as miscellaneous, to investigate both the volume and polarization on these topics and how they vary through time. Parler includes more negative discussions around lockdown and masks, as expected, but not much around vaccines. We also observe more balanced discussions on Twitter and a general disconnect between the discussions on Parler and Twitter.
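The paper's own polarization measure is not reproduced here. Purely as an illustration of how a polarization-style score can be tracked over time from labeled posts, the sketch below computes the weekly gap in mean stance between two partisan groups on a given topic; the data schema, the weekly_gap helper, and the stance scores are assumptions, not the authors' measure or data.

```python
# Generic illustration only: NOT the paper's polarization measure. It shows one
# simple way to track how far apart two partisan groups are on a per-topic
# score over time, given posts that already carry party and stance labels.
import pandas as pd

# Assumed toy schema: one row per post with a timestamp, an inferred party
# label, a COVID-19 topic (lockdown / masks / vaccines / misc), and a stance
# score in [-1, 1] produced by some upstream classifier.
posts = pd.DataFrame(
    {
        "date": pd.to_datetime(["2020-10-01", "2020-10-02", "2020-10-02", "2020-10-03"]),
        "party": ["D", "R", "D", "R"],
        "topic": ["masks", "masks", "vaccines", "masks"],
        "stance": [0.8, -0.6, 0.5, -0.9],
    }
)

def weekly_gap(df: pd.DataFrame, topic: str) -> pd.Series:
    """Absolute gap between the two parties' mean weekly stance on one topic."""
    sub = df[df["topic"] == topic]
    weekly = (
        sub.groupby([pd.Grouper(key="date", freq="W"), "party"])["stance"]
        .mean()
        .unstack("party")
    )
    return (weekly["D"] - weekly["R"]).abs()

print(weekly_gap(posts, "masks"))
```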