
Jackie Cheung

Core Academic Member
Canada CIFAR AI Chair
Associate Scientific Director, Mila; Associate Professor, School of Computer Science, McGill University
Consultant Researcher, Microsoft Research

Biography

I am an associate professor in the School of Computer Science at McGill University and a consultant researcher at Microsoft Research.

My group investigates natural language processing, an area of AI research that builds computational models of human languages, such as English or French. The goal of our research is to develop computational methods for understanding text and speech in order to generate language that is fluent and context-appropriate.

In our lab, we investigate statistical machine learning techniques for analyzing and making predictions about language. Some of my current projects focus on summarizing fiction, extracting events from text, and adapting language across genres.

Current Students

Rahul Aralikatte
Postdoctorate - McGill University
rahul.aralikatte@mila.quebec
Kushal Arora
PhD - McGill University
arorakus@mila.quebec
Ines Arous
Postdoctorate - McGill University
ines.arous@mila.quebec
Yu Bai
Research Intern - McGill University
yu.bai@mila.quebec
Meng (Caden) Cao
PhD - McGill University
caomeng@mila.quebec
Aishik Chakraborty
PhD - McGill University
chakraba@mila.quebec
Ziling Cheng
Research Intern - McGill University
ziling.cheng@mila.quebec
Andre Cianflone
PhD - McGill University
cianfloa@mila.quebec
Maxime Darrin
PhD - McGill University
maxime.darrin@mila.quebec
Bonaventure Dossou
PhD - McGill University
bonaventure.dossou@mila.quebec
Aylin Erman
Master's Research - McGill University
aylin.erman@mila.quebec
Ori Ernst
Postdoctorate - McGill University
ori.ernst@mila.quebec
Jules Gagnon-Marchand
Master's Research - McGill University
gagnonju@mila.quebec
Steven Koniaev
Research Intern - McGill University
steven.koniaev@mila.quebec
Zichao Li
PhD - McGill University
zichao.li@mila.quebec
Yu Lu Liu
Master's Research - McGill University
yu-lu.liu@mila.quebec
Caleb Moses
PhD - McGill University
caleb.moses@mila.quebec
Martin Pömsl
Master's Research - McGill University
martin.pomsl@mila.quebec
Ian Porada
PhD - McGill University
poradaia@mila.quebec
Haowei Qiu
Professional Master's - McGill University
haowei.qiu@mila.quebec
Michael Runningwolf
PhD - McGill University
michael.runningwolf@mila.quebec
Cesare Spinoso-Di Piano
PhD - McGill University
cesare.spinoso@mila.quebec
Nathan Zeweniuk
Research Intern - McGill University
nathan.zeweniuk@mila.quebec
Xiyuan Zou
Research Intern - McGill University
xiyuan.zou@mila.quebec

Publications

Ensemble Distillation for Unsupervised Constituency Parsing
Behzad Shayegh
Yanshuai Cao
Xiaodan Zhu
Lili Mou
Balaur: Language Model Pretraining with Lexical Semantic Relations
Andrei Mircea
Qualitative Code Suggestion: A Human-Centric Approach to Qualitative Coding
Qualitative coding is a content analysis method in which researchers read through a text corpus and assign descriptive labels or qualitative codes to passages. It is an arduous and manual process that human-computer interaction (HCI) studies have shown could greatly benefit from NLP techniques to assist qualitative coders. Yet, previous attempts at leveraging language technologies have set up qualitative coding as a fully automatable classification problem. In this work, we take a more assistive approach by defining the task of qualitative code suggestion (QCS), in which a ranked list of previously assigned qualitative codes is suggested from an identified passage. In addition to being user-motivated, QCS integrates previously ignored properties of qualitative coding, such as the sequence in which passages are annotated, the importance of rare codes, and the differences in annotation styles between coders. We investigate the QCS task by releasing the first publicly available qualitative coding dataset, CVDQuoding, consisting of interviews conducted with women at risk of cardiovascular disease. In addition, we conduct a human evaluation which shows that our systems consistently make relevant code suggestions.
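To make the task concrete, the following is a minimal sketch of code suggestion framed as a ranking problem: a newly identified passage is compared against previously annotated passages, and the codes of the most similar ones are suggested first. The TF-IDF retrieval, the toy interview snippets, and the suggest_codes helper are illustrative assumptions, not the systems evaluated in the paper.

```python
# A minimal, hypothetical sketch of qualitative code suggestion (QCS) as a
# ranking problem: rank previously assigned codes by the similarity of the
# new passage to passages that already carry each code.
from collections import defaultdict

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy, invented examples of passages annotated earlier in the coding session.
annotated = [
    ("I worry about my blood pressure every day.", "health anxiety"),
    ("My doctor never explains the test results.", "communication barriers"),
    ("I started walking after work to stay active.", "lifestyle change"),
]

def suggest_codes(new_passage, history, top_k=3):
    """Return a ranked list of (code, score) pairs for a new passage."""
    passages = [text for text, _ in history]
    vectorizer = TfidfVectorizer().fit(passages + [new_passage])
    sims = cosine_similarity(
        vectorizer.transform([new_passage]),
        vectorizer.transform(passages),
    )[0]
    # Score each code by the most similar passage that carries it.
    scores = defaultdict(float)
    for (_, code), sim in zip(history, sims):
        scores[code] = max(scores[code], float(sim))
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print(suggest_codes("The nurse used terms I could not follow.", annotated))
```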
Investigating the Effect of Pre-finetuning BERT Models on NLI Involving Presuppositions
Jad Kabbara
Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
Yu Lu Liu
Meng Cao
Su Lin Blodgett
Adam Trischler
AI and NLP publication venues have increasingly encouraged researchers to reflect on possible ethical considerations, adverse impacts, and other responsible AI issues their work might engender. However, for specific NLP tasks our understanding of how prevalent such issues are, or when and why these issues are likely to arise, remains limited. Focusing on text summarization, a common NLP task largely overlooked by the responsible AI community, we examine research and reporting practices in the current literature. We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022. We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals. We also discuss current evaluation practices and consider how authors discuss the limitations of both prior work and their own work. Overall, we find that relatively few papers engage with possible stakeholders or contexts of use, which limits their consideration of potential downstream adverse impacts or other responsible AI issues. Based on our findings, we make recommendations on concrete practices and research directions.
Vārta: A Large-Scale Headline-Generation Dataset for Indic Languages
Rahul Aralikatte
Ziling Cheng
Sumanth Doddapaneni
We present Vārta, a large-scale multilingual dataset for headline generation in Indic languages. This dataset includes 41.8 million news articles in 14 different Indic languages (and English), which come from a variety of high-quality sources. To the best of our knowledge, this is the largest collection of curated articles for Indic languages currently available. We use the data collected in a series of experiments to answer important questions related to Indic NLP and multilinguality research in general. We show that the dataset is challenging even for state-of-the-art abstractive models and that they perform only slightly better than extractive baselines. Owing to its size, we also show that the dataset can be used to pretrain strong language models that outperform competitive baselines in both NLU and NLG benchmarks.
Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
Anya Belz
Craig Thomson
Ehud Reiter
Gavin Abercrombie
Jose M. Alonso-Moral
Mohammad Arvan
Mark Cieliebak
Elizabeth Clark
Kees van Deemter
Tanvi Dinkar
Ondřej Dušek
Steffen Eger
Qixiang Fang
Albert Gatt
Dimitra Gkatzia
Javier González-Corbelle
Dirk Hovy
Manuela Hürlimann
Takumi Ito
John D. Kelleher
Filip Klubička
Huiyuan Lai
Chris van der Lee
Emiel van Miltenburg
Yiru Li
Saad Mahamood
Margot Mieskes
Malvina Nissim
Natalie Paige Parde
Ondřej Plátek
Verena Teresa Rieser
Pablo Mosteiro Romero
Joel Tetreault
Antonio Toral
Xiaojun Wan
Leo Wanner
Lewis Joshua Watson
Diyi Yang
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction and (ii) enough obtainable information to be considered for reproduction, and that all but one of the experiments we selected for reproduction were discovered to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.
Investigating Failures to Generalize for Coreference Resolution Models
Ian Porada
Kaheer Suleman
Adam Trischler
Coreference resolution models are often evaluated on multiple datasets. Datasets vary, however, in how coreference is realized -- i.e., how the theoretical concept of coreference is operationalized in the dataset -- due to factors such as the choice of corpora and annotation guidelines. We investigate the extent to which errors of current coreference resolution models are associated with existing differences in operationalization across datasets (OntoNotes, PreCo, and Winogrande). Specifically, we distinguish between and break down model performance into categories corresponding to several types of coreference, including coreferring generic mentions, compound modifiers, and copula predicates, among others. This breakdown helps us investigate how state-of-the-art models might vary in their ability to generalize across different coreference types. In our experiments, for example, models trained on OntoNotes perform poorly on generic mentions and copula predicates in PreCo. Our findings help calibrate expectations of current coreference resolution models, and future work can explicitly account for those types of coreference that are empirically associated with poor generalization when developing models.
Systematic Rectification of Language Models via Dead-end Analysis
Meng Cao
Mehdi Fatemi
Samira Shabanian
With adversarial or otherwise normal prompts, existing large language models (LLMs) can be pushed to generate toxic discourses. One way to reduce the risk of LLMs generating undesired discourses is to alter the training of the LLM. This can be very restrictive due to demanding computation requirements. Other methods rely on rule-based or prompt-based token elimination, which are limited as they dismiss future tokens and the overall meaning of the complete discourse. Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic. That is, at each point, we advise against token selections in proportion to how likely a finished text from this point would be toxic. To this end, we formally extend the dead-end theory from the recent reinforcement learning (RL) literature to also cover uncertain outcomes. Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification, which can be applied to diverse LLMs as long as they share the same vocabulary. Importantly, our method does not require access to the internal representations of the LLM, but only the token probability distribution at each decoding step. This is crucial as many LLMs today are hosted on servers and only accessible through APIs. When applied to various LLMs, including GPT-3, our approach significantly improves the generated discourse compared to the base LLMs and other techniques in terms of both the overall language and detoxification performance.
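As a rough illustration of the idea, the sketch below down-weights each candidate token's probability under a base language model by an external estimate of how likely the finished discourse would be toxic, then renormalizes. Both helper functions are hypothetical stand-ins; the paper's rectification model is a trained dead-end estimator, not the toy heuristic shown here.

```python
# A toy illustration of decoding-time rectification, not the paper's trained
# dead-end model: candidate tokens are penalized in proportion to the
# estimated probability that the completed discourse ends up toxic.

def base_lm_next_token_probs(prefix):
    """Stand-in for the base LM's next-token distribution at this step."""
    return {"friend": 0.5, "fool": 0.3, "colleague": 0.2}

def toxicity_risk(prefix, token):
    """Stand-in for a small rectification model estimating the probability
    (in [0, 1]) that a discourse continued with `token` ends up toxic."""
    return 0.9 if token == "fool" else 0.05

def rectified_next_token_probs(prefix):
    probs = base_lm_next_token_probs(prefix)
    # Down-weight each candidate by its estimated toxicity risk, renormalize.
    adjusted = {t: p * (1.0 - toxicity_risk(prefix, t)) for t, p in probs.items()}
    total = sum(adjusted.values())
    return {t: p / total for t, p in adjusted.items()}

print(rectified_next_token_probs("You are such a"))
```

Note that, as in the paper, only the token probability distribution is touched; no access to the model's internal representations is assumed.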
Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness
Zichao Li
Ines Arous
The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest. To maintain the knowledge acquired by LLMs, we need to ensure that the editing of learned facts respects internal logical constraints, known as the dependency of knowledge. Existing work on editing LLMs has partially addressed the issue of dependency by ensuring that the editing of a fact applies to its lexical variations without disrupting irrelevant ones. However, it neglects the dependency between a fact and its logical implications. We propose an evaluation protocol with an accompanying question-answering dataset, StandUp, that provides a comprehensive assessment of the editing process considering the above notions of dependency. Our protocol involves setting up a controlled environment in which we edit facts and monitor their impact on LLMs, along with their implications based on If-Then rules. Extensive experiments on StandUp show that existing knowledge editing methods are sensitive to the surface form of knowledge, and that they have limited performance in inferring the implications of edited facts.
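The dependency being tested can be illustrated with a toy example: after a fact is edited, any answer derived from it through an If-Then rule should change accordingly, while unrelated facts stay untouched. The dictionary knowledge base, edit_fact helper, and the seat-of-government rule below are hypothetical stand-ins for an LLM, a knowledge-editing method, and the StandUp rules, respectively.

```python
# A toy illustration of the dependency evaluated by the protocol, not the
# StandUp protocol itself: check that an edit propagates to its implication
# (implication awareness) without disturbing unrelated facts (specificity).

facts = {"capital_of_France": "Paris", "capital_of_Japan": "Tokyo"}

def edit_fact(kb, key, new_value):
    """Stand-in for applying a knowledge-editing method to an LLM."""
    return {**kb, key: new_value}

def implied_seat_of_government(kb, country_key):
    """If-Then rule: if X is the capital, then the seat of government is X."""
    return kb[country_key]

edited = edit_fact(facts, "capital_of_France", "Lyon")

# Implication awareness: the derived answer should follow the edited fact.
assert implied_seat_of_government(edited, "capital_of_France") == "Lyon"
# Specificity: unrelated facts should be left untouched by the edit.
assert edited["capital_of_Japan"] == "Tokyo"
print("edited fact and its implication are consistent")
```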
How Useful Are Educational Questions Generated by Large Language Models?
Sabina Elkins
Ekaterina Kochmar
Iulian V. Serban