Publications

Minimax and Neyman–Pearson Meta-Learning for Outlier Languages
Edoardo Ponti
Rahul Aralikatte
Disha Shrivastava
Anders Søgaard
Model-agnostic meta-learning (MAML) has recently been put forth as a strategy to learn resource-poor languages in a sample-efficient fashion. Nevertheless, the properties of these languages are often not well represented by those available during training. Hence, we argue that the i.i.d. assumption ingrained in MAML makes it ill-suited for cross-lingual NLP. In fact, under a decision-theoretic framework, MAML can be interpreted as minimising the expected risk across training languages (with a uniform prior), which is known as the Bayes criterion. To increase its robustness to outlier languages, we create two variants of MAML based on alternative criteria: Minimax MAML reduces the maximum risk across languages, while Neyman–Pearson MAML constrains the risk in each language to a maximum threshold. Both criteria constitute fully differentiable two-player games. In light of this, we propose a new adaptive optimiser solving for a local approximation to their Nash equilibrium. We evaluate both model variants on two popular NLP tasks, part-of-speech tagging and question answering. We report gains in their average and minimum performance across low-resource languages in zero- and few-shot settings, compared to joint multi-source transfer and vanilla MAML. The code for our experiments is available at https://github.com/rahular/robust-maml.
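As an illustration of the minimax criterion described above (a minimal sketch, not the authors' released implementation; `minimax_weights` and its parameters are hypothetical names), the max-player over per-language risks can be approximated with exponentiated-gradient ascent on the probability simplex:

```python
import numpy as np

def minimax_weights(risks, lr=0.1, steps=100):
    """Adversarial (max-player) weights over languages: a mirror-ascent
    sketch that concentrates weight on the highest-risk languages,
    approximating the minimax criterion max_w sum_i w_i * risk_i."""
    risks = np.asarray(risks, dtype=float)
    w = np.ones_like(risks) / len(risks)  # start from the uniform (Bayes) prior
    for _ in range(steps):
        w = w * np.exp(lr * risks)        # exponentiated-gradient ascent step
        w = w / w.sum()                   # project back onto the simplex
    return w

# with fixed risks, weight concentrates on the worst-off language
weights = minimax_weights([0.2, 0.5, 0.9])
```

In the full method the min-player (the model parameters) would simultaneously descend the weighted risk, yielding the two-player game the abstract refers to.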
On-the-Fly Attention Modulation for Neural Generation
Yue Dong
Chandra Bhagavatula
Ximing Lu
Jena D. Hwang
Antoine Bosselut
Yejin Choi
Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense. Our analyses of sentence-level attention patterns in LMs reveal that neural degeneration may be associated with insufficient learning of task-specific characteristics by the attention mechanism. This finding motivates on-the-fly attention modulation -- a simple but effective method that enables the injection of priors into attention computation during inference. Automatic and human evaluation results on three text generation benchmarks demonstrate that attention modulation helps LMs generate text with enhanced fluency, creativity, and commonsense reasoning, in addition to significantly reducing sentence-level repetition.
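The prior-injection idea can be sketched as follows (a generic illustration assuming a log-space mixing scheme, not the paper's exact formulation; `modulated_attention` and `alpha` are assumed names):

```python
import numpy as np

def modulated_attention(scores, prior, alpha=0.5):
    """On-the-fly attention modulation sketch: mix a prior distribution
    into the raw attention scores in log space at inference time, then
    renormalise with a numerically stable softmax."""
    logits = scores + alpha * np.log(prior + 1e-9)        # inject the prior
    logits = logits - logits.max(axis=-1, keepdims=True)  # stability shift
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)

# a uniform prior leaves the attention distribution unchanged
attn = modulated_attention(np.array([1.0, 2.0, 3.0]), np.ones(3))
```

Because the modulation happens at inference only, no retraining of the LM is required, which is the appeal of the method described above.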
Optimizing Deeper Transformers on Small Datasets
Peng Xu
Dhruv Kumar
Wei Yang
Wenjie Zi
Keyi Tang
Chenyang Huang
S. Prince
Yanshuai Cao
It is a common belief that training deep transformers from scratch requires large datasets. Consequently, for small datasets, people usually use shallow and simple additional layers on top of pre-trained models during fine-tuning. This work shows that this does not always need to be the case: with proper initialization and optimization, the benefits of very deep transformers can carry over to challenging tasks with small datasets, including Text-to-SQL semantic parsing and logical reading comprehension. In particular, we successfully train 48 layers of transformers, comprising 24 fine-tuned layers from pre-trained RoBERTa and 24 relation-aware layers trained from scratch. With fewer training steps and no task-specific pre-training, we obtain state-of-the-art performance on the challenging cross-domain Text-to-SQL parsing benchmark Spider. We achieve this by deriving a novel Data-dependent Transformer Fixed-update initialization scheme (DT-Fixup), inspired by the prior T-Fixup work. Further error analysis shows that increasing depth can help improve generalization on small datasets for hard cases that require reasoning and structural understanding.
Semantic and Syntactic Enhanced Aspect Sentiment Triplet Extraction
Zhexue Chen
Hong Huang
Xuanhua Feng Shi
Hai-nan Jin
Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from sentences, where each triplet includes an entity, its associated sentiment, and the opinion span explaining the reason for the sentiment. Most existing research addresses this problem in a multi-stage pipeline manner, which neglects the mutual information between the three elements and suffers from error propagation. In this paper, we propose a Semantic and Syntactic Enhanced aspect Sentiment triplet Extraction model (S3E2) to fully exploit the syntactic and semantic relationships between the triplet elements and jointly extract them. Specifically, we design a graph-sequence dual representation and modeling paradigm for the task of ASTE: we represent the semantic and syntactic relationships between word pairs in a sentence as a graph and encode it with Graph Neural Networks (GNNs), while modeling the original sentence with an LSTM to preserve the sequential information. Under this setting, we further apply a more efficient inference strategy for the extraction of triplets. Extensive evaluations on four benchmark datasets show that S3E2 significantly outperforms existing approaches, demonstrating S3E2's superiority and flexibility in an end-to-end fashion.
StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem
Anna Bethke
A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or African Americans are athletic. Such beliefs (biases) are known to hurt target groups. Since pretrained language models are trained on large real-world data, they are known to capture stereotypical biases. It is important to quantify to what extent these biases are present in them. Although this is a rapidly growing area of research, existing literature falls short in two important aspects: 1) it mainly evaluates the bias of pretrained language models on a small set of artificial sentences, even though these models are trained on natural data; 2) current evaluations focus on measuring bias without considering the language modeling ability of a model, which could lead to misleading trust in a model even if it is a poor language model. We address both of these problems. We present StereoSet, a large-scale natural English dataset to measure stereotypical biases in four domains: gender, profession, race, and religion. We contrast both the stereotypical bias and the language modeling ability of popular models like BERT, GPT-2, RoBERTa, and XLNet. We show that these models exhibit strong stereotypical biases. Our data and code are available at https://stereoset.mit.edu.
Supervised multi-specialist topic model with applications on large-scale electronic health record data
Ziyang Song
Xavier Sumba Toral
Yixin Xu
Aihua Liu
Liming Guo
Guido Powell
Aman Verma
Ariane Marelli
Motivation: Electronic health record (EHR) data provides a new venue to elucidate disease comorbidities and latent phenotypes for precision medicine. To fully exploit its potential, a realistic generative process of the EHR data needs to be modelled. Materials and Methods: We present MixEHR-S to jointly infer specialist-disease topics from the EHR data. As the key contribution, we model the specialist assignments and ICD-coded diagnoses as the latent topics based on a patient's underlying disease topic mixture in a novel unified supervised hierarchical Bayesian topic model. For efficient inference, we developed a closed-form collapsed variational inference algorithm to learn the model distributions of MixEHR-S. Results: We applied MixEHR-S to two independent large-scale EHR databases in Quebec with three targeted applications: (1) Congenital Heart Disease (CHD) diagnostic prediction among 154,775 patients; (2) Chronic obstructive pulmonary disease (COPD) diagnostic prediction among 73,791 patients; (3) future insulin treatment prediction among 78,712 patients diagnosed with diabetes as a means to assess disease exacerbation. In all three applications, MixEHR-S conferred clinically meaningful latent topics among the most predictive latent topics and achieved superior target prediction accuracy compared to existing methods, providing opportunities for prioritizing high-risk patients for healthcare services. Availability and implementation: MixEHR-S source code and scripts of the experiments are freely available at https://github.com/li-lab-mcgill/mixehrS
A systematic analysis of ICSD-3 diagnostic criteria and proposal for further structured iteration.
Christophe Gauld
Régis Lopez
Pierre A. GEOFFROY
Charles Morin
Kelly Guichard
Elodie Giroux
Yves Dauvilliers
Pierre Philip
Jean‐Arthur Micoulaud‐Franchi
Temporal Profiles of Social Attention Are Different Across Development in Autistic and Neurotypical People.
Teresa Del Bianco
Luke Mason
Tony Charman
Julianne Tillman
Eva Loth
Hannah Hayward
F. Shic
Jan K. Buitelaar
Mark Johnson
Emily J. H. Jones
Jumana Ahmad
Sara Ambrosino
Tobias Banaschewski
Simon Baron-Cohen
Sarah Baumeister
Christian Beckmann
Sven Bölte
Thomas Bourgeron
Carsten Bours
M. Brammer
Daniel Brandeis
Claudia Brogna
Yvette de Bruijn
Ineke Cornelissen
Daisy Crawley
Flavio Dell’Acqua
Sarah Durston
Christine Ecker
Jessica Faulkner
Vincent Frouin
Pilar Garcés
David Goyard
Lindsay Ham
Joerg F. Hipp
Rosemary Holt
Meng-Chuan Lai
Xavier Liogier D’ardhuy
Michael V. Lombardo
David J. Lythgoe
René Mandl
Andre Marquand
Maarten Mennes
Andreas Meyer-Lindenberg
Carolin Moessnang
Nico Mueller
Declan Murphy
Beth Oakley
Laurence O’Dwyer
Marianne Oldehinkel
Bob Oranje
Gahan Pandina
Antonio Persico
Barbara Ruggeri
Amber N. V. Ruigrok
Jessica Sabet
Roberto Sacco
Antonia San José Cáceres
Emily Simonoff
Will Spooren
Roberto Toro
Heike Tost
Jack Waldman
Steve C. R. Williams
Caroline Wooldridge
Marcel P. Zwiers
Why do sleep disorders belong to mental disorder classifications? A network analysis of the "Sleep-Wake Disorders" section of the DSM-5.
Christophe Gauld
Régis Lopez
Charles Morin
Julien Maquet
Aileen McGonigal
Pierre A. GEOFFROY
Eric Fakra
Pierre Philip
Jean‐Arthur Micoulaud‐Franchi
Human brain anatomy reflects separable genetic and environmental components of socioeconomic status
H. Kweon
Gökhan Aydogan
Alain Dagher
C. Ruff
Gideon Nave
Martha J Farah
Philipp Koellinger
Recent studies report that socioeconomic status (SES) correlates with brain structure. Yet, such findings are variable and little is known about the underlying causes. We present a well-powered voxel-based analysis of grey matter volume (GMV) across levels of SES, finding many small SES effects widely distributed across the brain, including cortical, subcortical and cerebellar regions. We also construct a polygenic index of SES to control for the additive effects of common genetic variation related to SES, which attenuates the observed SES-GMV relations to different degrees in different areas. Remaining variance, which may be attributable to environmental factors, is substantially accounted for by body mass index, a marker for lifestyle related to SES. In sum, SES affects multiple brain regions through measurable genetic and environmental effects. One-sentence summary: Socioeconomic status is linked with brain anatomy through a varying balance of genetic and environmental influences.
Local Structure Matters Most: Perturbation Study in NLU
Louis Clouâtre
Prasanna Parthasarathi
Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that neural models are surprisingly insensitive to the order of words. In this paper, we investigate this phenomenon by developing order-altering perturbations at the level of words, subwords, and characters to analyze their effect on neural models' performance on language understanding tasks. We measure the impact of perturbations on the local neighborhood of characters and the global position of characters in the perturbed texts, and observe that perturbation functions found in prior literature affect only the global ordering while the local ordering remains relatively unperturbed. We empirically show that neural models, regardless of their inductive biases, pretraining scheme, or choice of tokenization, mostly rely on the local structure of text to build understanding and make limited use of the global structure.
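The local-vs-global distinction above can be made concrete with a perturbation that shuffles tokens only inside small windows (a hypothetical helper in the spirit of the paper's analysis, not the authors' code):

```python
import random

def perturb_local(tokens, window=2, seed=0):
    """Shuffle tokens only within non-overlapping local windows, keeping
    the global order of the windows intact. This destroys local ordering
    while leaving global structure largely in place."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(tokens), window):
        chunk = tokens[i:i + window]
        rng.shuffle(chunk)  # shuffle in place within the window
        out.extend(chunk)
    return out

# token multiset and window membership are preserved; only local order moves
shuffled = perturb_local("the quick brown fox jumps over".split(), window=2)
```

A complementary global perturbation would instead shuffle the windows themselves while keeping each window's internal order fixed.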
Clones in deep learning code: what, where, and why?
Hadhemi Jebnoun
Md. Saidur Rahman
Biruk Asmare Muse