
Pengfei Liu

Alumni

Publications

SPE: Symmetrical Prompt Enhancement for Factual Knowledge Retrieval
Weizhe Yuan
Jinlan Fu
Zhengbao Jiang
Xiao Liu
Yanan Zheng
Zhengxiao Du
Ming Ding
Pretrained language models (PLMs) have been shown to accumulate factual knowledge from their unsupervised pretraining procedures (Petroni et al., 2019). Prompting is an effective way to query such knowledge from PLMs. Recently, continuous prompt methods have been shown to have a larger potential than discrete prompt methods in generating effective queries (Liu et al., 2021a). However, these methods do not consider symmetry of the task. In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for fact retrieval that leverages the symmetry of the task. Our results on LAMA, a popular fact retrieval dataset, show significant improvement of SPE over previous prompt methods.
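As background for the kind of querying SPE builds on, here is a minimal sketch of discrete, LAMA-style fact retrieval from a masked language model. It is not the SPE method itself, and the model name and example fact are illustrative assumptions:

from transformers import pipeline

# Probe a pretrained masked LM with a cloze-style prompt (a discrete-prompt baseline).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A relational fact becomes a template whose object slot is [MASK].
prompt = "Dante was born in [MASK]."
for prediction in fill_mask(prompt, top_k=3):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")

Continuous prompt methods such as SPE replace the hand-written template with trainable prompt vectors optimized for each relation, rather than relying on a fixed discrete string.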
Contextualized Non-local Neural Networks for Sequence Learning
Shuaichen Chang
Xuanjing Huang
Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention. In this paper, we propose an approach that combines and draws on the complementary strengths of these two methods. Specifically, we propose contextualized non-local neural networks (CN3), which can both dynamically construct a task-specific structure of a sentence and leverage rich local dependencies within a particular neighbourhood. Experimental results on ten NLP tasks in text classification, semantic matching, and sequence labelling show that our proposed model outperforms competitive baselines and discovers task-specific dependency structures, thus providing better interpretability to users.
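The sketch below illustrates, in Python/PyTorch (an assumption about tooling), one minimal reading of the abstract's idea: a non-local attention branch whose weights act as a dynamically constructed soft graph over tokens, paired with a convolutional branch that keeps local neighbourhood cues. It is not the paper's actual CN3 layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualizedNonLocalSketch(nn.Module):
    """Illustrative sketch only: combines a non-local (attention) branch
    with a local (convolution) branch, per the abstract's description."""
    def __init__(self, dim, window=3):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.local = nn.Conv1d(dim, dim, kernel_size=window, padding=window // 2)

    def forward(self, x):                      # x: (batch, seq_len, dim)
        # Non-local branch: attention scores act as an input-dependent soft adjacency.
        scores = self.query(x) @ self.key(x).transpose(1, 2) / x.size(-1) ** 0.5
        adjacency = F.softmax(scores, dim=-1)  # (batch, seq_len, seq_len)
        non_local = adjacency @ self.value(x)
        # Local branch: convolution over a fixed neighbourhood.
        local = self.local(x.transpose(1, 2)).transpose(1, 2)
        return non_local + local               # fuse the two complementary views

Summing the two branches is one simple way to fuse the dynamically built graph with fixed local context; the paper's actual fusion mechanism may differ.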
Learning Multi-Task Communication with Message Passing for Sequence Learning
We present two architectures for multi-task learning with neural sequence models. Our approach allows the relationships between different tasks to be learned dynamically, rather than using an ad-hoc pre-defined structure as in previous work. We adopt the idea from message-passing graph neural networks, and propose a general graph multi-task learning framework in which different tasks can communicate with each other in an effective and interpretable way. We conduct extensive experiments in text classification and sequence labelling to evaluate our approach on multi-task learning and transfer learning. The empirical results show that our models not only outperform competitive baselines, but also learn interpretable and transferable patterns across tasks.
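A minimal sketch of the general idea, assuming PyTorch and hypothetical names: each task is a node carrying a state vector, and a learned soft adjacency decides how strongly tasks exchange messages, instead of a fixed pre-defined sharing structure. This illustrates message passing between tasks, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskMessagePassingSketch(nn.Module):
    """Illustrative sketch only: tasks as graph nodes that communicate
    through learned edge weights rather than a hand-designed structure."""
    def __init__(self, num_tasks, dim):
        super().__init__()
        self.task_states = nn.Parameter(torch.randn(num_tasks, dim))
        # Learnable logits over task-task edges; softmax turns them into a soft graph.
        self.edge_logits = nn.Parameter(torch.zeros(num_tasks, num_tasks))
        self.update = nn.Linear(2 * dim, dim)

    def forward(self):
        weights = F.softmax(self.edge_logits, dim=-1)  # learned communication graph
        messages = weights @ self.task_states          # aggregate neighbour states
        # Each task updates its state from its own state plus incoming messages.
        return torch.tanh(self.update(torch.cat([self.task_states, messages], dim=-1)))

In a full system, each task's encoder would read from its node state, so the learned edge weights keep the communication pattern between tasks inspectable.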
Multi-task Learning over Graph Structures
We present two architectures for multi-task learning with neural sequence models. Our approach allows the relationships between different tasks to be learned dynamically, rather than using an ad-hoc pre-defined structure as in previous work. We adopt the idea from message-passing graph neural networks and propose a general graph multi-task learning framework in which different tasks can communicate with each other in an effective and interpretable way. We conduct extensive experiments in text classification and sequence labeling to evaluate our approach on multi-task learning and transfer learning. The empirical results show that our models not only outperform competitive baselines but also learn interpretable and transferable patterns across tasks.