Publications

An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
Recent work has shown pre-trained language models capture social biases from the large amounts of text they are trained on. This has attracted attention to developing techniques that mitigate such biases. In this work, we perform an empirical survey of five recently proposed bias mitigation techniques: Counterfactual Data Augmentation (CDA), Dropout, Iterative Nullspace Projection, Self-Debias, and SentenceDebias. We quantify the effectiveness of each technique using three intrinsic bias benchmarks while also measuring the impact of these techniques on a model’s language modeling ability, as well as its performance on downstream NLU tasks. We experimentally find that: (1) Self-Debias is the strongest debiasing technique, obtaining improved scores on all bias benchmarks; (2) current debiasing techniques perform less consistently when mitigating non-gender biases; and (3) improvements on bias benchmarks such as StereoSet and CrowS-Pairs by using debiasing strategies are often accompanied by a decrease in language modeling ability, making it difficult to determine whether the bias mitigation was effective.
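For readers unfamiliar with how intrinsic benchmarks such as CrowS-Pairs probe bias, the core idea is to compare a masked language model's pseudo-log-likelihood for a stereotypical sentence against a minimally different anti-stereotypical counterpart. Below is a minimal sketch of that comparison; the model choice, example sentences, and scoring details are illustrative assumptions, not the survey's exact protocol:

```python
# Sketch of a CrowS-Pairs-style pseudo-log-likelihood comparison.
# Model and sentences are illustrative, not the paper's exact setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log-probabilities of each token, masking one position at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

stereo = pseudo_log_likelihood("Women are bad at math.")
anti = pseudo_log_likelihood("Men are bad at math.")
print("prefers stereotype" if stereo > anti else "prefers anti-stereotype")
```

Aggregating this preference over many minimal pairs yields a bias score; a debiasing technique "improves" the benchmark when the model's preference rate moves toward 50%.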
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Meng Cao
Jackie Chi Kit Cheung
State-of-the-art abstractive summarization systems often generate hallucinations, i.e., content that is not directly inferable from the source text. Despite being assumed to be incorrect, we find that much hallucinated content is actually consistent with world knowledge, which we call factual hallucinations. Including these factual hallucinations in a summary can be beneficial because they provide useful background information. In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities. Our method is based on an entity’s prior and posterior probabilities according to pre-trained and fine-tuned masked language models, respectively. Empirical results suggest that our method vastly outperforms two baselines in both accuracy and F1 scores and has a strong correlation with human judgments on factuality classification tasks. Furthermore, we use our method as a reward signal to train a summarization system using an off-line reinforcement learning (RL) algorithm that can significantly improve the factuality of generated summaries while maintaining the level of abstractiveness.
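A minimal sketch of the prior/posterior intuition follows. The prior is the masked entity's probability under a pre-trained LM given the summary alone; the posterior conditions a fine-tuned model on the source document as well. The checkpoint names, single-token simplification, and source-concatenation shortcut are illustrative assumptions, not the authors' released code:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
prior_lm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
# Hypothetical checkpoint standing in for a model fine-tuned on summary data.
posterior_lm = AutoModelForMaskedLM.from_pretrained("my-org/bert-finetuned-summaries").eval()

def masked_entity_prob(model, text: str, entity: str) -> float:
    """Probability of a (single-token, for simplicity) entity at its masked position."""
    masked = text.replace(entity, tok.mask_token, 1)
    inputs = tok(masked, return_tensors="pt", truncation=True)
    pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, pos]
    entity_id = tok.convert_tokens_to_ids(tok.tokenize(entity)[0])
    return torch.softmax(logits, dim=-1)[entity_id].item()

summary = "The meeting was held in Geneva."
source = "Officials gathered in Switzerland to discuss the treaty."
prior = masked_entity_prob(prior_lm, summary, "Geneva")                      # summary only
posterior = masked_entity_prob(posterior_lm, source + " " + summary, "Geneva")  # + source
# Heuristic reading: a high prior despite a low posterior suggests the entity
# comes from world knowledge rather than the source -- a candidate hallucination
# that may nonetheless be factual.
```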
Identification of out-of-distribution cases of CNN using class-based surprise adequacy
Mira Marhaba
Ettore Merlo
Giuliano Antoniol
Machine learning models are vulnerable to incorrectly classifying cases that fall outside the distribution observed during training and calibration.
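For context, likelihood-based surprise adequacy from the software-testing literature scores an input by how unlikely its activation trace is under a density fitted on training-time activations; a class-conditioned variant fits one density per class. A minimal sketch, with interfaces and the flagging threshold as illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_class_kdes(activations: np.ndarray, predicted_classes: np.ndarray) -> dict:
    """Fit one KDE per class over penultimate-layer activation traces
    collected on the training set. activations has shape (N, D)."""
    kdes = {}
    for c in np.unique(predicted_classes):
        traces = activations[predicted_classes == c]
        kdes[c] = gaussian_kde(traces.T)  # scipy expects shape (D, N)
    return kdes

def surprise_adequacy(kdes: dict, activation: np.ndarray, predicted_class) -> float:
    """Low density under the predicted class's KDE means the input is
    surprising, i.e., a possible out-of-distribution case."""
    density = kdes[predicted_class](activation.reshape(-1, 1))[0]
    return -np.log(density + 1e-12)

# Inputs whose surprise exceeds a threshold calibrated on held-out data
# would be flagged as out-of-distribution.
```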
Image Retrieval from Contextual Descriptions
Vibhav Vineet
Edoardo Ponti
The ability to integrate context, including perceptual and temporal cues, plays a pivotal role in grounding the meaning of a linguistic utterance. In order to measure to what extent current vision-and-language models master this ability, we devise a new multimodal challenge, Image Retrieval from Contextual Descriptions (ImageCoDe). In particular, models are tasked with retrieving the correct image from a set of 10 minimally contrastive candidates based on a contextual description. As such, each description contains only the details that help distinguish between images. Because of this, descriptions tend to be complex in terms of syntax and discourse and require drawing pragmatic inferences. Images are sourced from both static pictures and video frames. We benchmark several state-of-the-art models, including both cross-encoders such as ViLBERT and bi-encoders such as CLIP, on ImageCoDe. Our results reveal that these models dramatically lag behind human performance: the best variant achieves an accuracy of 20.9 on video frames and 59.4 on static pictures, compared with 90.8 in humans. Furthermore, we experiment with new model variants that are better equipped to incorporate visual and temporal context into their representations, which achieve modest gains. Our hope is that ImageCoDe will foster progress in grounded language understanding by encouraging models to focus on fine-grained visual differences.
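To make the bi-encoder setup concrete, here is a minimal sketch of CLIP-style retrieval over a 10-candidate set; the checkpoint, description, and file paths are illustrative, and ImageCoDe's actual evaluation harness differs:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative contextual description and hypothetical candidate image paths.
description = "The dog has just let go of the frisbee."
candidates = [Image.open(f"candidate_{i}.jpg") for i in range(10)]

inputs = processor(text=[description], images=candidates,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
# logits_per_text holds the description's similarity to each candidate;
# retrieval picks the highest-scoring image.
predicted = out.logits_per_text.argmax(dim=-1).item()
print(f"retrieved candidate index: {predicted}")
```

Because a bi-encoder scores each image independently of the others, it struggles with exactly the minimally contrastive distinctions the benchmark targets, which is one reason cross-encoders are also evaluated.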
Local Structure Matters Most: Perturbation Study in NLU
Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that neural models are surprisingly insensitive to the order of words. In this paper, we investigate this phenomenon by developing perturbations that alter the order of words, subwords, and characters, and analyzing their effect on neural models' performance on language understanding tasks. We measure the impact of perturbations on the local neighborhood and global position of characters in the perturbed texts and observe that perturbation functions found in prior literature only affect the global ordering while the local ordering remains relatively unperturbed. We empirically show that neural models, regardless of their inductive biases, pretraining scheme, or choice of tokenization, mostly rely on the local structure of text to build understanding and make limited use of the global structure.
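One way to see the local/global distinction is to contrast a global shuffle with a perturbation that only swaps within small windows. A minimal sketch follows; the window size and displacement metric are illustrative, not the paper's exact perturbation functions:

```python
import random

def global_shuffle(indices):
    """Destroys both local and global order."""
    out = indices[:]
    random.shuffle(out)
    return out

def local_shuffle(indices, window=2):
    """Shuffles only within fixed windows, so local neighborhoods largely survive."""
    out = []
    for i in range(0, len(indices), window):
        chunk = indices[i:i + window]
        random.shuffle(chunk)
        out.extend(chunk)
    return out

def mean_displacement(perm):
    """Average distance each position moved: a crude global-disruption score."""
    return sum(abs(orig - new) for new, orig in enumerate(perm)) / len(perm)

tokens = "the quick brown fox jumps over the lazy dog".split()
idx = list(range(len(tokens)))
for name, fn in [("global", global_shuffle), ("local", local_shuffle)]:
    perm = fn(idx)
    print(name, [tokens[i] for i in perm],
          f"displacement={mean_displacement(perm):.2f}")
```

Running this shows the local shuffle keeps mean displacement small (at most window - 1 per token) while the global shuffle scatters tokens far from their origins, which is the axis along which the paper separates model sensitivity.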
Moving shared decision-making forward in Iran
Nam Nguyen
Mahasti Alizadeh
P397. Genomic Deletions and Duplications Show Mirror Effects on Cognitive Ability According to Spatial Patterns of Gene Expression in the Human Brain
Kuldeep Kumar
Sayeh Kazem
Elise Douard
Zohra Saci
Laura Almasy
David Glahn
Sébastien Jacquemont
Performance-gated deliberation: A context-adapted strategy in which urgency is opportunity cost
Finding the right amount of deliberation, between insufficient and excessive, is a hard decision-making problem that depends on the value we place on our time. Average-reward, putatively encoded by tonic dopamine, serves in existing reinforcement learning theory as the opportunity cost of time, including deliberation time. Importantly, this cost can itself vary with the environmental context and is not trivial to estimate. Here, we propose how the opportunity cost of deliberation can be estimated adaptively on multiple timescales to account for non-stationary contextual factors. We use it in a simple decision-making heuristic based on average-reward reinforcement learning (AR-RL) that we call Performance-Gated Deliberation (PGD). We propose PGD as a strategy used by animals wherein deliberation cost is implemented directly as urgency, a previously characterized neural signal effectively controlling the speed of the decision-making process. We show PGD outperforms AR-RL solutions in explaining behaviour and urgency of non-human primates in a context-varying random walk prediction task and is consistent with relative performance and urgency in a context-varying random dot motion task. We make readily testable predictions for both neural activity and behaviour.
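A minimal sketch of the multi-timescale opportunity-cost idea: average reward is tracked with fast and slow exponential moving averages, their combination serves as an urgency (cost-of-time) signal, and deliberation stops once the expected gain from deliberating further falls below that cost. All constants and the stopping rule are illustrative assumptions, not the paper's fitted model:

```python
class PerformanceGatedDeliberation:
    """Toy sketch: urgency as an adaptively estimated opportunity cost of time."""

    def __init__(self, alpha_fast=0.1, alpha_slow=0.01, weight=0.5):
        self.r_fast = 0.0  # fast-timescale average-reward estimate
        self.r_slow = 0.0  # slow-timescale average-reward estimate
        self.alpha_fast, self.alpha_slow, self.weight = alpha_fast, alpha_slow, weight

    def update(self, reward: float) -> None:
        # Moving averages on two timescales track non-stationary context.
        self.r_fast += self.alpha_fast * (reward - self.r_fast)
        self.r_slow += self.alpha_slow * (reward - self.r_slow)

    def urgency(self) -> float:
        # Opportunity cost of one more unit of deliberation time.
        return self.weight * self.r_fast + (1 - self.weight) * self.r_slow

    def should_stop(self, expected_gain_per_step: float) -> bool:
        # Deliberate only while its expected marginal gain exceeds its cost.
        return expected_gain_per_step < self.urgency()
```

In a rich context the average-reward estimates rise, urgency grows, and the agent cuts deliberation short; in a lean context the cost of time falls and longer deliberation pays off.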
Predicting the probability distribution of bus travel time to measure the reliability of public transport services
Léa Ricard
Guy Desaulniers
Andrea Lodi
Louis-Martin Rousseau
Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering
Jing Zhang
Xiaokang Zhang
Jifan Yu
Jie Tang
Cuiping Li
Hong Chen
Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. A well-chosen subgraph is crucial, as a small one may exclude the answer while a large one might introduce more noise. However, existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning over partial subgraphs, which increases reasoning bias when intermediate supervision is missing. This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model. Extensive experiments demonstrate SR achieves significantly better retrieval and QA performance than existing retrieval methods. Via weakly supervised pre-training as well as end-to-end fine-tuning, SR achieves new state-of-the-art performance when combined with NSM, a subgraph-oriented reasoner, for embedding-based KBQA methods.
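A minimal sketch of a decoupled, embedding-based subgraph retriever: starting from the topic entity, relations are scored against the question by a trainable encoder and the top-scoring paths are expanded into the candidate subgraph. The encoder, KB interface, hop count, and beam size are illustrative assumptions, not SR's actual architecture:

```python
import torch
import torch.nn.functional as F

def retrieve_subgraph(question_vec, topic_entity, kb, encode_relation,
                      hops=2, top_k=3):
    """Greedy beam expansion: score outgoing relations against the question,
    keep the top-k at each hop, and collect visited triples as the subgraph.
    kb.outgoing(entity) -> list of (relation, tail) pairs (hypothetical
    interface); encode_relation maps a relation to a vector."""
    frontier, subgraph = {topic_entity}, set()
    for _ in range(hops):
        scored = []
        for head in frontier:
            for rel, tail in kb.outgoing(head):
                score = F.cosine_similarity(question_vec,
                                            encode_relation(rel), dim=0)
                scored.append((score.item(), head, rel, tail))
        scored.sort(reverse=True)
        kept = scored[:top_k]
        subgraph.update((h, r, t) for _, h, r, t in kept)
        frontier = {t for _, _, _, t in kept}
    return subgraph
```

Because retrieval is a separate, trainable module, the same retriever can feed any downstream subgraph reasoner, which is what makes the framework plug-and-play.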
On the estimation of discrete choice models to capture irrational customer behaviors
Sanjay Dominik Jena
Andrea Lodi
Claudio Sole
The random utility maximization model is by far the most adopted framework to estimate consumer choice behavior. However, behavioral economics has provided strong empirical evidence of irrational choice behaviors, such as halo effects, that are incompatible with this framework. Models belonging to the random utility maximization family may therefore not accurately capture such irrational behavior. Hence, more general choice models that overcome such limitations have been proposed. However, the flexibility of such models comes at the price of an increased risk of overfitting. As such, estimating such models remains a challenge. In this work, we propose an estimation method for the recently proposed generalized stochastic preference choice model, which subsumes the family of random utility maximization models and is capable of capturing halo effects. In particular, we propose a column-generation method to gradually refine the discrete choice model based on partially ranked preference sequences. Extensive computational experiments indicate that our model, explicitly accounting for irrational preferences, can significantly boost the predictive accuracy on both synthetic and real-world data instances. Summary of Contribution: In this work, we propose an estimation method for the recently proposed generalized stochastic preference choice model, which subsumes the family of random utility maximization models and is capable of capturing halo effects. Specifically, we show how to use partially ranked preferences to efficiently model rational and irrational customer types from transaction data. Our estimation procedure is based on column generation, where relevant customer types are efficiently extracted by expanding a treelike data structure containing the customer behaviors. Furthermore, we propose a new dominance rule among customer types whose effect is to prioritize low orders of interactions among products. An extensive set of experiments assesses the predictive accuracy of the proposed approach by comparing it against rank-based methods with only rational preferences and with more general benchmarks from the literature. Our results show that accounting for irrational preferences can boost predictive accuracy by 12.5% on average when tested on a real-world data set from a large chain of grocery and drug stores.
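A minimal sketch of the rank-based mixture idea, under the common reading of the generalized stochastic preference model in which a customer type is a preference ranking plus a choice index k (k = 1 is the rational case; k > 1 picks the k-th best available option, one way halo-like deviations arise). The types, weights, and products are illustrative; the paper's column-generation pricing and dominance rules are not reproduced here:

```python
from collections import defaultdict

def choice_probabilities(assortment, types):
    """types: list of (weight, ranking, k). Each customer type picks the
    k-th most preferred *offered* product from its ranking, falling back
    to no-purchase when fewer than k preferred products are offered."""
    probs = defaultdict(float)
    for weight, ranking, k in types:
        offered = [p for p in ranking if p in assortment]
        chosen = offered[k - 1] if len(offered) >= k else "no-purchase"
        probs[chosen] += weight
    return dict(probs)

types = [
    (0.6, ["A", "B", "C"], 1),  # rational: takes its best available option
    (0.3, ["B", "A", "C"], 1),
    (0.1, ["A", "C", "B"], 2),  # irrational: takes its second-best available
]
print(choice_probabilities({"A", "C"}, types))  # -> {'A': 0.9, 'C': 0.1}
```

In a column-generation scheme, the type weights are the master-problem variables fitted to transaction data, while the pricing step searches for a new (ranking, k) column with negative reduced cost.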
The Power of Prompt Tuning for Low-Resource Semantic Parsing
Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing—the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt tuned T5 models improve at generating target representations that are far from the pre-training distribution.
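A minimal sketch of soft prompt tuning with a frozen T5: only a small matrix of prompt embeddings prepended to the input is trained, while all T5 weights stay fixed. The checkpoint size, hyperparameters, and toy semantic-parsing pair are illustrative assumptions (the paper uses T5-xl):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")  # illustrative scale
model = T5ForConditionalGeneration.from_pretrained("t5-small")
for p in model.parameters():
    p.requires_grad = False  # freeze the entire pre-trained model

n_prompt, d_model = 20, model.config.d_model
prompt = torch.nn.Parameter(torch.randn(n_prompt, d_model) * 0.01)  # only trainable weights
optimizer = torch.optim.AdamW([prompt], lr=0.3)  # prompt tuning tolerates large LRs

def step(utterance: str, meaning_repr: str) -> float:
    enc = tok(utterance, return_tensors="pt")
    labels = tok(meaning_repr, return_tensors="pt").input_ids
    token_emb = model.get_input_embeddings()(enc.input_ids)              # (1, T, d)
    inputs_embeds = torch.cat([prompt.unsqueeze(0), token_emb], dim=1)   # prepend soft prompt
    mask = torch.cat([torch.ones(1, n_prompt), enc.attention_mask], dim=1)
    loss = model(inputs_embeds=inputs_embeds, attention_mask=mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Toy usage with a hypothetical meaning representation:
# step("book a flight to boston", "(book_flight (dest boston))")
```

Since only n_prompt × d_model parameters are updated, a separate prompt can be stored per task at negligible cost, which is part of what makes the method attractive in low-resource settings.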