2021-12
End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering
2021-11
Universal Dependencies 2.9
Mind the Context: The Impact of Contextualization in Neural Module Networks for Grounding Visual Referring Expressions
Visually Grounded Reasoning across Languages and Cultures
2021-10
The Power of Prompt Tuning for Low-Resource Semantic Parsing
An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-Trained Language Models
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Compositional Generalization in Dependency Parsing
TopiOCQA: Open-domain Conversational Question Answering with Topic Switching
2021-08
Post-hoc Interpretability for Neural NLP: A Survey
Minimax and Neyman-Pearson Meta-Learning for Outlier Languages
StereoSet: Measuring stereotypical bias in pretrained language models
2021-07
Modelling Latent Translations for Cross-Lingual Transfer
2021-06
Abg-CoQA: Clarifying Ambiguity in Conversational Question Answering
End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering
2021-05
Universal Dependencies 2.8.1
Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle
Understanding by Understanding Not: Modeling Negation in Language Models
2021-04
Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval
2021-01
Latent Translation Cross-Lingual Transfer
(venue unknown)
(2021-01-01)
2020-11
Universal Dependencies 2.7
MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
EMNLP 2020
(2020-11-01)
virtual.2020.emnlp.org
[LATEST on arXiv preprint arXiv:2012.13978 (2020-12-27)]
Learning Improvised Chatbots from Adversarial Modifications of Natural Language Feedback
2020-10
Explicitly Modeling Syntax in Language Model improves Generalization
2020-09
Measuring Systematic Generalization in Neural Proof Generation with Transformers
2020-05
Universal Dependencies 2.6
Words aren’t enough, their order matters: on the robustness of grounding visual referring expressions
2020-01
You could have said that instead: Improving Chatbots with Natural Language Feedback
2019-11
Universal Dependencies 2.5
Publications collected and formatted using Paperoni