Bang Liu

Associate Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université de Montréal, Department of Computer Science and Operations Research

Biography

Bang Liu is an assistant professor in the Department of Computer Science and Operations Research (DIRO), and a core member of the Applied Research in Computational Linguistics Lab (RALI) at Université de Montréal. He is also an associate academic member of Mila – Quebec Artificial Intelligence Institute and a Canada CIFAR AI Chair.

Liu received his BEng from the University of Science and Technology of China in 2013, and his MSc and PhD degrees from the University of Alberta in 2015 and 2020, respectively. His research interests lie primarily in the areas of natural language processing, multimodal and embodied learning, theory and techniques for AGI (e.g., understanding and improving large language models), and AI for science (e.g., health, materials science, XR).

Current Students

Independent visiting researcher - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal
Master's Research - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal
PhD - Université de Montréal
Research Intern - Université de Montréal
Master's Research - Université de Montréal
Master's Research - Université de Montréal

Publications

GOAt: Explaining Graph Neural Networks via Graph Output Attribution
Shengyao Lu
Keith G Mills
Jiao He
Di Niu
Understanding the decision-making process of Graph Neural Networks (GNNs) is crucial to their interpretability. Most existing methods for explaining GNNs typically rely on training auxiliary models, so the explanations themselves remain black-boxed. This paper introduces Graph Output Attribution (GOAt), a novel method to attribute graph outputs to input graph features, creating GNN explanations that are faithful, discriminative, and stable across similar samples. By expanding the GNN as a sum of scalar products involving node features, edge features and activation patterns, we propose an efficient analytical method to compute the contribution of each node or edge feature to each scalar product, and aggregate the contributions from all scalar products in the expansion to derive the importance of each node and edge. Through extensive experiments on synthetic and real-world data, we show that our method not only outperforms various state-of-the-art GNN explainers on the commonly used fidelity metric, but also exhibits stronger discriminability and stability by a remarkable margin.
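As a toy illustration of the attribution idea (assuming a single linear message-passing layer and made-up random inputs; this is not the paper's implementation), the output y = A·X·W decomposes exactly into scalar products of adjacency entries, node features and weights, so each (node, feature) contribution can be read off analytically and aggregated into node importances:

```python
import numpy as np

# Toy sketch inspired by the GOAt idea, for a single linear message-passing
# layer y = A @ X @ W (no nonlinearity). Because the map is linear, the output
# decomposes exactly into per-feature scalar products, so each (node, feature)
# contribution can be computed in closed form. Not the paper's method.

rng = np.random.default_rng(0)
n_nodes, n_feats, n_out = 4, 3, 2

A = rng.integers(0, 2, size=(n_nodes, n_nodes)).astype(float)  # toy adjacency
X = rng.normal(size=(n_nodes, n_feats))                        # node features
W = rng.normal(size=(n_feats, n_out))                          # layer weights

y = A @ X @ W  # layer output, shape (n_nodes, n_out)

# Contribution of feature X[i, k] to every output entry:
# y[v, o] = sum_{i, k} A[v, i] * X[i, k] * W[k, o]
contrib = np.einsum("vi,ik,ko->ikvo", A, X, W)  # shape (i, k, v, o)

# Sanity check: summing all contributions recovers the output exactly.
assert np.allclose(contrib.sum(axis=(0, 1)), y)

# Importance of node i = total magnitude of its features' contributions.
node_importance = np.abs(contrib).sum(axis=(1, 2, 3))
print(node_importance)
```

Real GNNs add nonlinearities, which the paper handles through activation patterns; the sketch above only shows the purely linear case.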
Efficient Classification of Long Documents via State-Space Models
Peng Lu
Suyuchen Wang
Mehdi Rezagholizadeh
Ivan Kobyzev
HoneyBee: Progressive Instruction Finetuning of Large Language Models for Materials Science
Yu Song
Santiago Miret
Huan Zhang
MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization
Yuyan Chen
Zhihao Wen
Ge Fan
Zhengyu Chen
Wei Wu
Dayiheng Liu
Zhixu Li
Yanghua Xiao
SkillQG: Learning to Generate Question for Reading Comprehension Assessment
Xiaoqiang Wang
Siliang Tang
Lingfei Wu
Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models
Zhong Zhang
Junming Shao
MatSci-NLP: Evaluating Scientific Language Models on Materials Science Language Tasks Using Text-to-Schema Modeling
Yurun Song
Santiago Miret
QRelScore: Better Evaluating Generated Questions with Deeper Understanding of Context-aware Relevance
Xiaoqiang Wang
Siliang Tang
Lingfei Wu
Existing metrics for assessing question generation not only require costly human references but also fail to take into account the input context of generation, reflecting a lack of deep understanding of the relevance between the generated questions and input contexts. As a result, they may wrongly penalize a legitimate and reasonable candidate question when it (1) involves complicated reasoning with the context or (2) can be grounded by multiple pieces of evidence in the context. In this paper, we propose QRelScore, a context-aware Relevance evaluation metric for Question Generation. Based on off-the-shelf language models such as BERT and GPT-2, QRelScore employs both word-level hierarchical matching and sentence-level prompt-based generation to cope with complicated reasoning and diverse generation from multiple evidences, respectively. Compared with existing metrics, our experiments demonstrate that QRelScore achieves a higher correlation with human judgments while being much more robust to adversarial samples.
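A minimal sketch of one ingredient of such a reference-free metric, assuming the Hugging Face transformers package and plain GPT-2: score a candidate question by the average log-likelihood the language model assigns to it when conditioned on the context. This is only a crude relevance proxy in the spirit of the sentence-level, prompt-based component, not the QRelScore metric itself:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Crude reference-free relevance proxy: average log-likelihood GPT-2 assigns
# to the question tokens when conditioned on the context. A simplified sketch,
# not the metric defined in the paper.

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def relevance(context: str, question: str) -> float:
    ctx_ids = tok(context + " Question:", return_tensors="pt").input_ids
    q_ids = tok(" " + question, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, q_ids], dim=1)
    with torch.no_grad():
        logits = lm(input_ids).logits
    # Log-probability of each token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.size(0)), targets]
    # Keep only the positions that predict question tokens.
    q_lp = token_lp[ctx_ids.size(1) - 1:]
    return q_lp.mean().item()

ctx = "The Amazon is the largest rainforest on Earth, spanning nine countries."
print(relevance(ctx, "How many countries does the Amazon rainforest span?"))
print(relevance(ctx, "Who won the 2010 World Cup?"))
```

A relevant question should receive a noticeably higher average log-likelihood than an off-topic one, which is the intuition the sentence-level component builds on.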
Better Modeling the Programming World with Code Concept Graphs-augmented Multi-modal Learning
Martin Weyssow
Houari Sahraoui
The progress made in code modeling has been tremendous in recent years thanks to the design of natural language processing learning approaches based on state-of-the-art model architectures. Nevertheless, we believe that the current state of the art does not focus enough on the full potential that data may bring to a learning process in software engineering. Our vision builds on the idea of leveraging multi-modal learning approaches to model the programming world. In this paper, we investigate one of the underlying ideas of our vision, which aims at leveraging high-level relationships between domain concepts manipulated through particular language constructs, based on concept graphs of identifiers. In particular, we propose to enhance an existing pretrained language model of code by jointly learning it with a graph neural network based on our concept graphs. We conducted a preliminary evaluation that shows a gain in effectiveness of the models for code search using a simple joint-learning method, and it prompts us to further investigate our research vision.
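A minimal sketch of the joint-learning idea, under heavy simplification: a stand-in linear layer plays the role of the pretrained code language model, a tiny hand-rolled GCN encodes a hypothetical concept graph, and a contrastive-style loss pulls the two views of the same snippet together. All module names, shapes and features below are invented for illustration and do not reflect the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal joint-learning sketch: align a code-sequence embedding with a
# concept-graph embedding of the same snippet. Hypothetical modules and
# random placeholder features throughout.

class TinyGCN(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mean-aggregate neighbour features, project, then pool to a graph vector.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = F.relu(self.lin(adj @ x / deg))
        return h.mean(dim=0)

seq_dim, node_dim, emb_dim = 128, 32, 64
code_encoder = nn.Linear(seq_dim, emb_dim)   # stand-in for a pretrained code LM head
graph_encoder = TinyGCN(node_dim, emb_dim)

# One (code, concept-graph) pair with random placeholder features.
code_repr = torch.randn(seq_dim)             # pooled code LM representation
node_feats = torch.randn(5, node_dim)        # 5 concept nodes
adj = (torch.rand(5, 5) > 0.5).float()       # placeholder concept-graph edges

z_code = F.normalize(code_encoder(code_repr), dim=0)
z_graph = F.normalize(graph_encoder(node_feats, adj), dim=0)

# Pull the matching code/graph views together; a full setup would also use
# in-batch negatives, as in standard contrastive joint training.
loss = 1 - torch.dot(z_code, z_graph)
loss.backward()
print(float(loss))
```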
Grow-and-Clip: Informative-yet-Concise Evidence Distillation for Answer Explanation
Yuyan Chen
Yanghua Xiao
Interpreting the predictions of existing Question Answering (QA) models is critical to many real-world intelligent applications, such as QA systems for healthcare, education, and finance. However, existing QA models lack interpretability and provide no feedback or explanation to help end-users understand why a specific prediction is the answer to a question. In this research, we argue that the evidence for an answer is critical to enhancing the interpretability of QA models. Unlike previous research that simply extracts several sentences from the context as evidence, we are the first to explicitly define the concept of evidence as the supporting facts in a context that are informative, concise, and readable. Besides, we provide effective strategies to quantitatively measure the informativeness, conciseness and readability of evidence. Furthermore, we propose the Grow-and-Clip Evidence Distillation (GCED) algorithm to extract evidence from contexts by trading off informativeness, conciseness, and readability. We conduct extensive experiments on the SQuAD and TriviaQA datasets with several baseline models to evaluate the effect of GCED on interpreting answers to questions. Human evaluations are also carried out to check the quality of the distilled evidence. Experimental results show that automatically distilled evidence has human-like informativeness, conciseness and readability, which can enhance the interpretability of the answers to questions.
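The kind of trade-off GCED navigates can be illustrated with a toy greedy selector, assuming hypothetical weights and crude proxies for informativeness (question-word overlap), conciseness (a length penalty) and readability (average word length). This sketch only shows the grow-then-stop pattern and is not the GCED algorithm from the paper:

```python
import re

# Toy greedy "grow-and-clip"-style evidence selection: grow the evidence set
# sentence by sentence while a weighted score keeps improving, then stop.
# Proxies and weights are invented for illustration only.

def score(evidence: list[str], question: str,
          w_info: float = 1.0, w_concise: float = 0.01, w_read: float = 0.02) -> float:
    text = " ".join(evidence).lower()
    q_terms = set(re.findall(r"\w+", question.lower()))
    words = re.findall(r"\w+", text)
    info = len(q_terms & set(words)) / max(len(q_terms), 1)          # overlap with question
    concise = -len(words)                                            # fewer words is better
    read = -sum(len(w) for w in words) / max(len(words), 1)          # shorter words read easier
    return w_info * info + w_concise * concise + w_read * read

def grow(context_sentences: list[str], question: str, max_sents: int = 2) -> list[str]:
    evidence: list[str] = []
    for _ in range(max_sents):
        if not context_sentences:
            break
        best = max(context_sentences, key=lambda s: score(evidence + [s], question))
        if score(evidence + [best], question) <= score(evidence, question):
            break  # "clip": stop growing once no sentence improves the score
        evidence.append(best)
        context_sentences = [s for s in context_sentences if s != best]
    return evidence

ctx = ["Marie Curie won two Nobel Prizes.",
       "She was born in Warsaw in 1867.",
       "Paris is the capital of France."]
print(grow(ctx, "How many Nobel Prizes did Marie Curie win?"))
```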
Tell Me How to Survey: Literature Review Made Simple with Automatic Reading Path Generation
Jiayuan Ding
Tong Xiang
Zijing Ou
Wangyang Zuo
Ruihui Zhao
Chenhua Lin
Yefeng Zheng
Recent years have witnessed the dramatic growth of paper volumes, with plenty of new research papers published every day, especially in the area of computer science. How to glean papers worth reading from the massive literature to do a quick survey or keep up with the latest advances on a specific research topic has become a challenging task. Existing academic search engines return relevant papers by individually calculating the relevance between each paper and the query. However, such systems usually omit the prerequisite chains of a research topic and cannot form a meaningful reading path. In this paper, we introduce a new task named Reading Path Generation (RPG), which aims at automatically producing a path of papers to read for a given query. To serve as a research benchmark, we further propose SurveyBank, a dataset consisting of large quantities of survey papers in the field of computer science together with their citation relationships. Furthermore, we propose a graph-optimization-based approach for reading path generation that takes the relationships between papers into account. Extensive evaluations demonstrate that our approach outperforms other baselines. A real-time Reading Path Generation (RePaGer) system has also been implemented with our designed model. Our source code and the SurveyBank dataset can be found at https://github.com/JiayuanDing100/Reading-Path-Generation.
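The notion of a reading path over a citation graph can be illustrated with a small sketch, assuming a hypothetical mini citation graph: a target paper is read only after its (transitively) cited prerequisites, which a depth-first post-order traversal produces directly. This is only an illustration of ordering papers along citation links, not the RePaGer system or its graph-optimization model:

```python
# Toy reading-path generator over a citation graph: to read a target paper,
# first read the papers it (transitively) cites, in an order where every
# paper appears after its prerequisites. Hypothetical mini citation graph.

cites = {  # paper -> papers it cites
    "Survey-2022": ["Transformer-2017", "BERT-2019"],
    "BERT-2019": ["Transformer-2017"],
    "Transformer-2017": ["Seq2Seq-2014"],
    "Seq2Seq-2014": [],
}

def reading_path(target: str) -> list[str]:
    path, seen = [], set()

    def visit(paper: str) -> None:
        if paper in seen:
            return
        seen.add(paper)
        for cited in cites.get(paper, []):
            visit(cited)          # prerequisites first
        path.append(paper)        # then the paper itself

    visit(target)
    return path

print(reading_path("Survey-2022"))
# ['Seq2Seq-2014', 'Transformer-2017', 'BERT-2019', 'Survey-2022']
```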
Accepted Tutorials at The Web Conference 2022
Riccardo Tommasini
Senjuti Basu Roy
Xuan Wang
Hongwei Wang
Heng Ji
Jiawei Han
Preslav Nakov
Giovanni Da San Martino
Firoj Alam
Markus Schedl
Elisabeth Lex
Akash Bharadwaj
Graham Cormode
Milan Dojchinovski
Jan Forberg
Johannes Frey
Pieter Bonte
Marco Balduini
Matteo Belcao
Emanuele Della Valle
Junliang Yu
Hongzhi Yin
Tong Chen
Haochen Liu
Yiqi Wang
Wenqi Fan
Xiaorui Liu
Jamell Dacon
Lingjuan Lye
Jiliang Tang
Aristides Gionis
Stefan Neumann
Bruno Ordozgoiti
Simon Razniewski
Hiba Arnaout
Shrestha Ghosh
Fabian Suchanek
Lingfei Wu
Yu Chen
Yunyao Li
Filip Ilievski
Daniel Garijo
Hans Chalupsky
Pedro Szekely
Ilias Kanellos
Dimitris Sacharidis
Thanasis Vergoulis
Nurendra Choudhary
Nikhil Rao
Karthik Subbian
Srinivasan Sengamedu
Chandan K. Reddy
Friedhelm Victor
Bernhard Haslhofer
George Katsogiannis- Meimarakis
Georgia Koutrika
Shengmin Jin
Danai Koutra
Reza Zafarani
Yulia Tsvetkov
Vidhisha Balachandran
Sachin Kumar
Xiangyu Zhao
Bo Chen
Huifeng Guo
Yejing Wang
Ruiming Tang
Yang Zhang
Wenjie Wang
Peng Wu
Fuli Feng
Xiangnan He
This paper summarizes the content of the 20 tutorials that have been given at The Web Conference 2022: 85% of these tutorials are lecture style, and 15% of them are hands-on.