2020-11
Universal Dependencies 2.7
MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
Learning Improvised Chatbots from Adversarial Modifications of Natural Language Feedback
2020-10
Explicitly Modeling Syntax in Language Model improves Generalization
2020-07
Words aren’t enough, their order matters: On the Robustness of Grounding Visual Referring Expressions
2020-05
Universal Dependencies 2.6
2020-04
StereoSet: Measuring stereotypical bias in pretrained language models
2020-01
You could have said that instead: Improving Chatbots with Natural Language Feedback
2019-11
Universal Dependencies 2.5
Publications collected and formatted using Paperoni