
Siva Reddy

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science and Department of Linguistics
Research Topics
Deep Learning
Natural Language Processing
Reasoning
Representation Learning

Biography

Siva Reddy is an assistant professor at the School of Computer Science and in the Department of Linguistics at McGill University. He completed a postdoc with the Stanford NLP Group in September 2019.

Reddy’s research aims to endow machines with natural language understanding abilities, enabling applications such as question answering and conversational systems. His expertise includes building both symbolic (linguistic and induced) and deep learning models for language.


Publications

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Aarohi Srivastava
Abhinav Rastogi
Abhishek Rao
Abu Awal Md Shoeb
Abubakar Abid
Adam Fisch
Adam R. Brown
Adam Santoro
Aditya Gupta
Adrià Garriga-Alonso
Agnieszka Kluska
Aitor Lewkowycz
Akshat Agarwal
Alethea Power
Alex Ray
Alex Warstadt
Alexander W. Kocurek
Ali Safaya
Ali Tazarv
Alice Xiang
Alicia Parrish
Allen Nie
Aman Hussain
Amanda Askell
Amanda Dsouza
Ambrose Slone
Ameet Rahane
Anantharaman S. Iyer
Anders Johan Andreassen
Andrea Madotto
Andrea Santilli
Andreas Stuhlmüller
Andrew M. Dai
Andrew La
Andrew Lampinen
Andy Zou
Angela Jiang
Angelica Chen
Anh Vuong
Animesh Gupta
Anna Gottardi
Antonio Norelli
Anu Venkatesh
Arash Gholamidavoodi
Arfa Tabassum
Arul Menezes
Arun Kirubarajan
Asher Mullokandov
Ashish Sabharwal
Austin Herrick
Avia Efrat
Aykut Erdem
Ayla Karakaş
B. Ryan Roberts
Bao Sheng Loe
Barret Zoph
Bartłomiej Bojanowski
Batuhan Özyurt
Behnam Hedayatnia
Behnam Neyshabur
Benjamin Inden
Benno Stein
Berk Ekmekci
Bill Yuchen Lin
Blake Howald
Bryan Orinion
Cameron Diao
Cameron Dour
Catherine Stinson
Cedrick Argueta
Cesar Ferri
Chandan Singh
Charles Rathkopf
Chenlin Meng
Chitta Baral
Chiyu Wu
Chris Callison-Burch
Christopher Waites
Christian Voigt
Christopher D Manning
Christopher Potts
Cindy Ramirez
Clara E. Rivera
Clemencia Siro
Colin Raffel
Courtney Ashcraft
Cristina Garbacea
Damien Sileo
Dan Garrette
Dan Hendrycks
Dan Kilman
Dan Roth
C. Daniel Freeman
Daniel Khashabi
Daniel Moseguí González
Danielle Perszyk
Danny Hernandez
Danqi Chen
Daphne Ippolito
Dar Gilboa
David Dohan
David Drakard
David Jurgens
Debajyoti Datta
Deep Ganguli
Denis Emelin
Denis Kleyko
Deniz Yuret
Derek Chen
Derek Tam
Dieuwke Hupkes
Dilyar Buzan
Dimitri Coelho Mollo
Diyi Yang
Dong-Ho Lee
Dylan Schrader
Ekaterina Shutova
Ekin Dogus Cubuk
Elad Segal
Eleanor Hagerman
Elizabeth Barnes
Elizabeth Donoway
Ellie Pavlick
Emanuele Rodolà
Emma Lam
Eric Chu
Eric Tang
Erkut Erdem
Ernie Chang
Ethan A Chi
Ethan Dyer
Ethan Jerzak
Ethan Kim
Eunice Engefu Manyasi
Evgenii Zheltonozhskii
Fanyue Xia
Fatemeh Siar
Fernando Martínez-Plumed
Francesca Happé
François Chollet
Frieda Rong
Gaurav Mishra
Genta Indra Winata
Gerard de Melo
Germán Kruszewski
Giambattista Parascandolo
Giorgio Mariani
Gloria Xinyue Wang
Gonzalo Jaimovitch-Lopez
Gregor Betz
Guy Gur-Ari
Hana Galijasevic
Hannah Kim
Hannah Rashkin
Hannaneh Hajishirzi
Harsh Mehta
Hayden Bogar
Henry Shevlin
Henry Francis Anthony Shevlin
Hinrich Schuetze
Hiromu Yakura
Hongming Zhang
Hugh Mee Wong
Ian Ng
Isaac Noble
Jaap Jumelet
Jack Geissinger
Jackson Kernion
Jacob Hilton
Jaehoon Lee
Jaime Fernández Fisac
James B Simon
James Koppel
James Zheng
James Zou
Jan Kocon
Jana Thompson
Janelle Wingfield
Jared Kaplan
Jarema Radom
Jascha Sohl-Dickstein
Jason Phang
Jason Wei
Jekaterina Novikova
Jelle Bosscher
Jennifer Marsh
Jeremy Kim
Jeroen Taal
Jesse Engel
Jesujoba Oluwadara Alabi
Jiacheng Xu
Jiaming Song
Jillian Tang
Joan Waweru
John Burden
John Miller
John U. Balis
Jonathan Batchelder
Jonathan Berant
Jörg Frohberg
Jos Rozen
Jose Hernandez-Orallo
Joseph Boudeman
Joseph Guerr
Joseph Jones
Joshua B. Tenenbaum
Joshua S. Rule
Joyce Chua
Kamil Kanclerz
Karen Livescu
Karl Krauth
Karthik Gopalakrishnan
Katerina Ignatyeva
Katja Markert
Kaustubh Dhole
Kevin Gimpel
Kevin Omondi
Kory Mathewson
Kristen Chiafullo
Ksenia Shkaruta
Kumar Shridhar
Kyle McDonell
Kyle Richardson
Laria Reynolds
Leo Gao
Ling Zhang
Liam Dugan
Lianhui Qin
Lidia Contreras-Ochando
Louis-Philippe Morency
Luca Moschella
Lucas Lam
Lucy Noble
Ludwig Schmidt
Luheng He
Luis Oliveros-Colón
Luke Metz
Lütfi Kerem Senel
Maarten Bosma
Maarten Sap
Maartje Ter Hoeve
Maheen Farooqi
Manaal Faruqui
Mantas Mazeika
Marco Baturan
Marco Marelli
Marco Maru
Maria Jose Ramirez-Quintana
Marie Tolkiehn
Mario Giulianelli
Martha Lewis
Martin Potthast
Matthew L Leavitt
Matthias Hagen
Mátyás Schubert
Medina Orduna Baitemirova
Melody Arnaud
Melvin McElrath
Michael Andrew Yee
Michael Cohen
Michael Gu
Michael Ivanitskiy
Michael Starritt
Michael Strube
Michał Swędrowski
Michele Bevilacqua
Michihiro Yasunaga
Mihir Kale
Mike Cain
Mimee Xu
Mirac Suzgun
Mitch Walker
Mo Tiwari
Mohit Bansal
Moin Aminnaseri
Mor Geva
Mozhdeh Gheini
Mukund Varma T
Nanyun Peng
Nathan Andrew Chi
Nayeon Lee
Neta Gur-Ari Krakover
Nicholas Cameron
Nicholas Roberts
Nick Doiron
Nicole Martinez
Nikita Nangia
Niklas Deckers
Niklas Muennighoff
Nitish Shirish Keskar
Niveditha S. Iyer
Noah Constant
Noah Fiedel
Nuan Wen
Oliver Zhang
Omar Agha
Omar Elbaghdadi
Omer Levy
Owain Evans
Pablo Antonio Moreno Casares
Parth Doshi
Pascale Fung
Paul Pu Liang
Paul Vicol
Pegah Alipoormolabashi
Peiyuan Liao
Percy Liang
Peter W Chang
Peter Eckersley
Phu Mon Htut
Pinyu Hwang
Pi-Bei Hwang
Piotr Miłkowski
Piyush Patil
Pouya Pezeshkpour
Priti Oli
Qiaozhu Mei
Qing Lyu
Qinlang Chen
Rabin Banjade
Rachel Etta Rudolph
Raefer Gabriel
Rahel Habacker
Ramon Risco
Raphaël Millière
Rhythm Garg
Richard Barnes
Rif A. Saurous
Riku Arakawa
Robbe Raymaekers
Robert Frank
Rohan Sikand
Roman Novak
Roman Sitelew
Ronan Le Bras
Rosanne Liu
Rowan Jacobs
Rui Zhang
Russ Salakhutdinov
Ryan Andrew Chi
Seungjae Ryan Lee
Ryan Stovall
Ryan Teehan
Rylan Yang
Sahib Singh
Saif Mohammad
Sajant Anand
Sam Dillavou
Sam Shleifer
Sam Wiseman
Samuel Gruetter
Samuel R. Bowman
Samuel Stern Schoenholz
Sanghyun Han
Sanjeev Kwatra
Sarah A. Rous
Sarik Ghazarian
Sayan Ghosh
Sean Casey
Sebastian Bischoff
Sebastian Gehrmann
Sebastian Schuster
Sepideh Sadeghi
Shadi Hamdan
Sharon Zhou
Shashank Srivastava
Sherry Shi
Shikhar Singh
Shima Asaadi
Shixiang Shane Gu
Shubh Pachchigar
Shubham Toshniwal
Shyam Upadhyay
Shyamolima Shammie Debnath
Siamak Shakeri
Simon Thormeyer
Simone Melzi
Sneha Priscilla Makini
Soo-Hwan Lee
Spencer Torene
Sriharsha Hatwar
Stanislas Dehaene
Stefan Divic
Stefano Ermon
Stella Biderman
Stephanie Lin
Stephen Prasad
Steven Piantadosi
Stuart Shieber
Summer Misherghi
Svetlana Kiritchenko
Swaroop Mishra
Tal Linzen
Tal Schuster
Tao Li
Tao Yu
Tariq Ali
Tatsunori Hashimoto
Te-Lin Wu
Théo Desbordes
Theodore Rothschild
Thomas Phan
Tianle Wang
Tiberius Nkinyili
Timo Schick
Timofei Kornev
Titus Tunduny
Tobias Gerstenberg
Trenton Chang
Trishala Neeraj
Tushar Khot
Tyler Shultz
Uri Shaham
Vedant Misra
Vera Demberg
Victoria Nyamai
Vikas Raunak
Vinay Venkatesh Ramasesh
vinay uday prabhu
Vishakh Padmakumar
Vivek Srikumar
William Fedus
William Saunders
Wout Vossen
Xiang Ren
Xiaoyu Tong
Xinran Zhao
Xinyi Wu
Xudong Shen
Yadollah Yaghoobzadeh
Yair Lakretz
Yangqiu Song
Yasaman Bahri
Yejin Choi
Yichi Yang
Sophie Hao
Yiding Hao
Yifu Chen
Yonatan Belinkov
Yufang Hou
Yuntao Bai
Zachary Seid
Zhuoye Zhao
Zijian Wang
Zijie J. Wang
Zirui Wang
Ziyi Wu
Combining Parameter-efficient Modules for Task-level Generalisation
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks. In this work, we assume that each task is associated with a subset of latent skills from an (arbitrary size) inventory. In turn, each skill corresponds to a parameter-efficient (sparse / low-rank) model adapter. By jointly learning adapters and a routing function that allocates skills to each task, the full network is instantiated as the average of the parameters of active skills. We propose several inductive biases that encourage re-usage and composition of the skills, including variable-size skill allocation and a dual-speed learning rate. We evaluate our latent-skill model in two main settings: 1) multitask reinforcement learning for instruction following on 8 levels of the BabyAI platform; and 2) few-shot fine-tuning of language models on 160 NLP tasks of the CrossFit benchmark. We find that the modular design of our network enhances sample efficiency in reinforcement learning and few-shot generalisation in supervised learning, compared to a series of baselines. These include models where parameters are fully shared, task-specific, conditionally generated (HyperFormer), or sparse mixture-of-experts (TaskMoE).
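The core mechanism in the abstract above - instantiating a task's parameters as the average of its active skill adapters - can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the adapter shapes, the fixed routing matrix, and the task names are all invented, and in the paper both the adapters and the routing are learned jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

n_skills, dim = 4, 8
# One parameter-efficient adapter (here just a flat vector) per latent skill.
skill_adapters = rng.normal(size=(n_skills, dim))

# Binary routing: which skills each task activates (learned jointly in the paper,
# hard-coded here for illustration).
task_routing = {
    "taskA": np.array([1, 0, 1, 0]),
    "taskB": np.array([0, 1, 1, 1]),
}

def task_parameters(task):
    """Instantiate a task's parameters as the average of its active skill adapters."""
    z = task_routing[task]
    return (z[:, None] * skill_adapters).sum(axis=0) / z.sum()

params_a = task_parameters("taskA")
assert params_a.shape == (dim,)
# taskA's parameters are the mean of skill adapters 0 and 2.
assert np.allclose(params_a, (skill_adapters[0] + skill_adapters[2]) / 2)
```

Because tasks share skills (here, skill 2 appears in both routing rows), knowledge learned for one task can transfer to another, which is the source of the reported sample-efficiency gains.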
Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness
Jackie CK Cheung
The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest. To maintain the knowledge acquired by LLMs, we need to ensure that the editing of learned facts respects internal logical constraints, known as the dependency of knowledge. Existing work on editing LLMs has partially addressed the issue of dependency, where the editing of a fact should apply to its lexical variations without disrupting irrelevant ones. However, it neglects the dependency between a fact and its logical implications. We propose an evaluation protocol with an accompanying question-answering dataset, StandUp, that provides a comprehensive assessment of the editing process considering the above notions of dependency. Our protocol involves setting up a controlled environment in which we edit facts and monitor their impact on LLMs, along with their implications based on If-Then rules. Extensive experiments on StandUp show that existing knowledge editing methods are sensitive to the surface form of knowledge, and that they have limited performance in inferring the implications of edited facts.
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
StarCoder: may the source be with you!
Raymond Li
Loubna Ben allal
Yangtian Zi
Niklas Muennighoff
Denis Kocetkov
Chenghao Mou
Marc Marone
Christopher Akiki
Jia LI
Jenny Chim
Qian Liu
Evgenii Zheltonozhskii
Terry Yue Zhuo
Thomas Wang
Olivier Dehaene
Mishig Davaadorj
Joel Lamy-Poirier
Joao Monteiro
Oleh Shliazhko
Nicolas Gontier
Armel Zebaze
Ming-Ho Yee
Logesh Kumar Umapathi
Jian Zhu
Ben Lipkin
Muhtasham Oblokulov
Zhiruo Wang
Rudra Murthy
Jason T Stillerman
Siva Sankalp Patel
Dmitry Abulkhanov
Marco Zocca
Zhihan Zhang
N. Fahmy
Urvashi Bhattacharyya
Wenhao Yu
Swayam Singh
Paulo Villegas
M. Kunakov
Fedor Zhdanov
Manuel Romero
Tony Lee
Nadav Timor
Jennifer Ding
Claire S Schlesinger
Hailey Schoelkopf
Jan Ebert
Tri Dao
Mayank Mishra
Alex Gu
Jennifer Robinson
Carolyn Jane Anderson
Brendan Dolan-Gavitt
Danish Contractor
Daniel Fried
Yacine Jernite
Carlos Muñoz Ferrandis
Sean Hughes
Thomas Wolf
Arjun Guha
Leandro Von Werra
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
Syntactic Substitutability as Unsupervised Dependency Syntax
Syntax is a latent hierarchical structure which underpins the robust and compositional nature of human language. In this work, we explore the hypothesis that syntactic dependencies can be represented in language model attention distributions and propose a new method to induce these structures theory-agnostically. Instead of modeling syntactic relations as defined by annotation schemata, we model a more general property implicit in the definition of dependency relations, syntactic substitutability. This property captures the fact that words at either end of a dependency can be substituted with words from the same category. Substitutions can be used to generate a set of syntactically invariant sentences whose representations are then used for parsing. We show that increasing the number of substitutions used improves parsing accuracy on natural data. On long-distance subject-verb agreement constructions, our method achieves 79.5% recall compared to 8.9% using a previous method. Our method also provides improvements when transferred to a different parsing setup, demonstrating that it generalizes.
The StatCan Dialogue Dataset: Retrieving Data Tables through Conversations with Genuine Intents
We introduce the StatCan Dialogue Dataset consisting of 19,379 conversation turns between agents working at Statistics Canada and online users looking for published data tables. The conversations stem from genuine intents, are held in English or French, and lead to agents retrieving one of over 5000 complex data tables. Based on this dataset, we propose two tasks: (1) automatic retrieval of relevant tables based on an ongoing conversation, and (2) automatic generation of appropriate agent responses at each turn. We investigate the difficulty of each task by establishing strong baselines. Our experiments on a temporal data split reveal that all models struggle to generalize to future conversations, as we observe a significant drop in performance across both tasks when we move from the validation to the test set. In addition, we find that response generation models struggle to decide when to return a table. Considering that the tasks pose significant challenges to existing models, we encourage the community to develop models for our task, which can be directly used to help knowledge workers find relevant tables for live chat users.
FaithDial: A Faithful Benchmark for Information-Seeking Dialogue
Ehsan Kamalloo
Osmar Zaiane
Mo Yu
Edoardo M. Ponti
The goal of information-seeking dialogue is to respond to seeker queries with natural language utterances that are grounded on knowledge sources. However, dialogue systems often produce unsupported utterances, a phenomenon known as hallucination. To mitigate this behavior, we adopt a data-centric solution and create FaithDial, a new benchmark for hallucination-free dialogues, by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark. We observe that FaithDial is more faithful than WoW while also maintaining engaging conversations. We show that FaithDial can serve as a training signal for: i) a hallucination critic, which discriminates whether an utterance is faithful or not, and boosts performance by 12.8 F1 points on the BEGIN benchmark compared to existing datasets for dialogue coherence; ii) high-quality dialogue generation. We benchmark a series of state-of-the-art models and propose an auxiliary contrastive objective that achieves the highest level of faithfulness and abstractiveness based on several automated metrics. Further, we find that the benefits of FaithDial generalize to zero-shot transfer on other datasets, such as CMU-Dog and TopicalChat. Finally, human evaluation reveals that responses generated by models trained on FaithDial are perceived as more interpretable, cooperative, and engaging.
Post-hoc Interpretability for Neural NLP: A Survey
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
To explain NLP models, a popular approach is to use importance measures, such as attention, which indicate which input tokens are important for making a prediction. However, an open question is how well these explanations accurately reflect a model's logic, a property called faithfulness. To answer this question, we propose Recursive ROAR, a new faithfulness metric. It works by recursively masking allegedly important tokens and then retraining the model. The principle is that this should result in worse model performance compared to masking random tokens. The result is a performance curve as a function of the masking ratio. Furthermore, we propose a summarizing metric using relative area-between-curves (RACU), which allows for easy comparison across papers, models, and tasks. We evaluate 4 different importance measures on 8 different datasets, using both LSTM-attention models and RoBERTa models. We find that the faithfulness of importance measures is both model-dependent and task-dependent. This conclusion contradicts previous evaluations in both the computer vision and attention-faithfulness literature.
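The recursive masking loop described above can be sketched as a toy simulation. This is not the authors' code: the importance scores are random stand-ins, and the retraining step is replaced by a synthetic performance function that simply decays with the total importance removed. The point is only to show the shape of the procedure: repeatedly mask the most important remaining tokens and record performance after each round.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens = 20
tokens = np.arange(n_tokens)
importance = rng.random(n_tokens)  # stand-in for an importance measure (e.g. attention)

def retrain_and_evaluate(masked):
    # Stand-in for retraining the model on masked data and evaluating it:
    # performance degrades with the share of total importance removed.
    return 1.0 - importance[list(masked)].sum() / importance.sum()

masked = set()
curve = []
for step in range(5):  # recursive masking, 2 tokens (10%) per round
    remaining = [t for t in tokens if t not in masked]
    # Re-rank the remaining tokens and mask the top 2. (The paper would
    # recompute importance on the freshly retrained model at this point.)
    top = sorted(remaining, key=lambda t: -importance[t])[:2]
    masked.update(top)
    curve.append(retrain_and_evaluate(masked))

# For a faithful importance measure, performance should degrade monotonically,
# and faster than it would under random masking.
assert all(a >= b for a, b in zip(curve, curve[1:]))
```

In the real metric, this curve is compared against the curve obtained by masking random tokens, and the RACU score summarizes the area between the two.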
Does Entity Abstraction Help Generative Transformers Reason?
Nicolas Gontier
Christopher Pal
We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks requiring different forms of logical reasoning: (1) compositional language understanding with text-based relational reasoning (CLUTRR), (2) abductive reasoning (ProofWriter), (3) multi-hop question answering (HotpotQA), and (4) conversational question answering (CoQA). We propose and empirically explore three ways to add such abstraction: (i) as additional input embeddings, (ii) as a separate sequence to encode, and (iii) as an auxiliary prediction task for the model. Overall, our analysis demonstrates that models with abstract entity knowledge perform better than models without it. The best abstraction-aware models achieve an overall accuracy of 88.8% and 91.8%, compared to the baseline model's 62.9% and 89.8%, on CLUTRR and ProofWriter respectively. However, for HotpotQA and CoQA, we find that F1 scores improve by only 0.5% on average. Our results suggest that the benefit of explicit abstraction is significant in formally defined logical reasoning settings requiring many reasoning hops, but less so for NLP tasks with less formal logical structure.
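Option (i) above, adding entity-type information as extra input embeddings, can be sketched in a few lines. This is a hypothetical illustration: the vocabulary size, type inventory, and dimensions are invented, and a real model would use learned embedding tables inside a Transformer rather than fixed random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_types, dim = 100, 5, 16
token_emb = rng.normal(size=(vocab_size, dim))  # ordinary token embedding table
type_emb = rng.normal(size=(n_types, dim))      # one vector per entity type, e.g. PERSON, LOCATION

def embed(token_ids, type_ids):
    """Sum token embeddings with entity-type embeddings, option (i) in the abstract."""
    return token_emb[token_ids] + type_emb[type_ids]

# Three tokens, where the third is tagged with entity type 2.
x = embed(np.array([3, 17, 42]), np.array([0, 0, 2]))
assert x.shape == (3, dim)
assert np.allclose(x[2], token_emb[42] + type_emb[2])
```

The summed sequence then feeds into the Transformer as usual, so the model can condition on entity types without any architectural change.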
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
Emanuele Bugliarello
Fangyu Liu
Jonas Pfeiffer
Reliable evaluation benchmarks designed for replicability and comprehensiveness have driven progress in machine learning. Due to the lack of a multilingual benchmark, however, vision-and-language research has mostly focused on English language tasks. To fill this gap, we introduce the Image-Grounded Language Understanding Evaluation (IGLUE) benchmark. IGLUE brings together - by both aggregating pre-existing datasets and creating new ones - visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages. Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups. Based on the evaluation of the available state-of-the-art models, we find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks. Moreover, downstream performance is partially explained by the amount of available unlabelled textual data for pretraining, and only weakly by the typological distance of target-source languages. We hope to encourage future research efforts in this area by releasing the benchmark to the community.