Publications

SWEET - Weakly Supervised Person Name Extraction for Fighting Human Trafficking
Javin Liu
Hao Yu
Vidya Sujaya
Pratheeksha Nair
Kellin Pelrine
In this work, we propose SWEET (Supervise Weakly for Entity Extraction to fight Trafficking), a weak supervision pipeline for extracting person names from noisy escort advertisements. Our method combines the simplicity of rule matching (through antirules, i.e., negated rules) with the generalizability of large language models fine-tuned on benchmark, domain-specific, and synthetic datasets, treating both as sources of weak labels. One of the major challenges in this domain is limited labeled data. SWEET addresses this by obtaining multiple weak labels through labeling functions and effectively aggregating them. SWEET outperforms the previous supervised SOTA method for this task by 9% F1 score on domain data and generalizes better to common benchmark datasets. Furthermore, we release HTGEN, a synthetically generated dataset of escort advertisements (built using ChatGPT) to facilitate further research within the community.
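To make the weak-supervision setup concrete, below is a minimal, hypothetical sketch of the general idea: several labeling functions vote on each token and the votes are aggregated (here by simple majority). The labeling functions `antirule_lf` and `capitalization_lf` and the aggregator are illustrative placeholders, not SWEET's actual rules, fine-tuned models, or aggregation method.

```python
# Minimal illustration of aggregating token-level weak labels by majority vote.
# The labeling functions below are toy stand-ins, not SWEET's actual ones.
from collections import Counter

ABSTAIN, PERSON, OTHER = -1, 1, 0

def antirule_lf(token):
    """Toy anti-rule: common non-name words can never be PERSON."""
    return OTHER if token.lower() in {"call", "now", "available", "tonight"} else ABSTAIN

def capitalization_lf(token):
    """Toy heuristic: capitalized tokens might be names."""
    return PERSON if token[:1].isupper() else ABSTAIN

def aggregate(token, lfs):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [v for v in (lf(token) for lf in lfs) if v != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else OTHER

tokens = "Call Mia tonight".split()
print([(t, aggregate(t, [antirule_lf, capitalization_lf])) for t in tokens])
```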
Systematic Generalization by Finetuning? Analyzing Pretrained Language Models Using Constituency Tests
Aishik Chakraborty
Jackie Ck Cheung
Constituents are groups of words that behave as a syntactic unit. Many linguistic phenomena (e.g., question formation, diathesis alternations) require the manipulation and rearrangement of constituents in a sentence. In this paper, we investigate how different finetuning setups affect the ability of pretrained sequence-to-sequence language models such as BART and T5 to replicate constituency tests — transformations that involve manipulating constituents in a sentence. We design multiple evaluation settings by varying the combinations of constituency tests and sentence types that a model is exposed to during finetuning. We show that models can replicate a linguistic transformation on a specific type of sentence that they saw during finetuning, but performance degrades substantially in other settings, showing a lack of systematic generalization. These results suggest that models often learn to manipulate sentences at a surface level unrelated to the constituent-level syntactic structure, for example by copying the first word of a sentence. These results may partially explain the brittleness of pretrained language models in downstream tasks.
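As an illustration of what a constituency-test finetuning pair might look like, the sketch below builds source/target sentences for one such test (it-clefting of the subject constituent). The sentences and the pair format are invented for this example and are not the paper's actual data, tests, or prompts.

```python
# Toy examples of a constituency test (it-clefting) framed as a
# sequence-to-sequence finetuning pair.  Sentences are invented for
# illustration; the paper's actual tests and data may differ.
def cleft_subject(subject: str, rest: str) -> dict:
    """Build a source/target pair that clefts the subject constituent."""
    source = f"{subject} {rest}."
    target = f"It was {subject.lower()} that {rest}."
    return {"source": source, "target": target}

pairs = [
    cleft_subject("The tired dog", "slept on the porch"),
    cleft_subject("My neighbour's cat", "chased the delivery truck"),
]
for p in pairs:
    print(p["source"], "->", p["target"])
```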
Technological Solutions to Online Toxicity: Potential and Pitfalls
Arezo Bodaghi
Ketra A. Schmitt
Social media platforms present a perplexing duality, acting at once as sites to build community and a sense of belonging, while also giving rise to misinformation, facilitating and intensifying disinformation campaigns, and perpetuating existing patterns of discrimination from the physical world. The first step platforms take in mitigating the harmful side of social media is identifying and managing toxic content. Users produce an enormous volume of posts which must be evaluated very quickly. This is an application context that requires machine-learning (ML) tools, but as we detail in this article, ML approaches rely on human annotators, analysts, and moderators. Our review of existing methods and potential improvements indicates that neither humans nor ML can be removed from this process in the near future. However, we see room for improvement in the working conditions of these human workers.
Toward Stronger Textual Attack Detectors
Pierre Colombo
Marine Picot
Nathan Noiry
Guillaume Staerman
The landscape of available textual adversarial attacks keeps growing, posing severe threats and raising concerns regarding the integrity of deep NLP systems. However, the crucial problem of defending against malicious attacks has only recently drawn the attention of the NLP community. Such defenses are nonetheless instrumental in developing robust and trustworthy systems. This paper makes two important contributions in this line of research: (i) we introduce LAROUSSE, a new framework to detect textual adversarial attacks, and (ii) we introduce STAKEOUT, a new benchmark composed of nine popular attack methods, three datasets, and two pre-trained models. LAROUSSE is ready to use in production as it is unsupervised, hyperparameter-free, and non-differentiable, protecting it against gradient-based methods. Our new benchmark STAKEOUT allows for a robust evaluation framework: we conduct extensive numerical experiments which demonstrate that LAROUSSE outperforms previous methods and which allow us to identify interesting factors behind detection-rate variations.
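For readers unfamiliar with the detection setting, here is a generic, hypothetical sketch of unsupervised textual-attack detection: score each incoming sentence embedding by its distance to a reference set of clean embeddings and flag outliers. This is not LAROUSSE's statistic (which, per the abstract, is unsupervised and hyperparameter-free); it only illustrates the kind of decision such detectors make.

```python
# Generic illustration of unsupervised textual-attack detection via a
# distance-based anomaly score over sentence embeddings.  This is NOT
# LAROUSSE's method; embeddings here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 16))     # embeddings of clean inputs
suspect = rng.normal(3.0, 1.0, size=(5, 16))     # shifted "attacked" inputs

def knn_score(x, reference, k=10):
    """Mean distance to the k nearest clean embeddings (higher = more suspicious)."""
    d = np.linalg.norm(reference - x, axis=1)
    return np.sort(d)[:k].mean()

# Flag anything scoring above the 95th percentile of clean scores.
threshold = np.quantile([knn_score(x, clean) for x in clean], 0.95)
print([knn_score(x, clean) > threshold for x in suspect])
```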
Twins with psychiatric features and a nonsense HRAS variant affecting transcript processing
Andrea Accogli
Meagan L. Collins Hutchinson
Eric Krochmalnek
Judith St-Onge
Nassima Boudrahem-Addour
Jean-Baptiste Rivière
Ridha Joober
Myriam Srour
Validation of an AI-assisted Treatment Outcome Measure for Gender-Affirming Voice Care: Comparing AI Accuracy to Listener's Perception of Voice Femininity.
Shane Simon
Einav Silverstein
Lauren Timmons-Sund
Jeremy Pinto
M. Eugenia Castro
Karla O’Dell
Michael M. Johns III
Wendy J. Mack
Yael Bensoussan
XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages
Sebastian Ruder
Jonathan H. Clark
Alexander Gutkin
Mihir Kale
Min Ma
Massimo Nicosia
Shruti Rijhwani
Parker Riley
Jean Michel Amath Sarr
Xinyi Wang
John Frederick Wieting
Nitish Gupta
Anna Katanova
Christo Kirov
Dana L Dickinson
Brian Roark
Bidisha Samanta
Connie Tao
Vera Axelrod
Isaac Rayburn Caswell
Colin Cherry
Dan Garrette
Reeve Ingle
Melvin Johnson
Dmitry Panteleev
Partha Talukdar
Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides a methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.
Learning domain-invariant classifiers for infant cry sounds
Charles Onu
Hemanth K. Sheetha
Arsenii Gorin
Active learning meets fractal decision boundaries: a cautionary tale from the Sitnikov three-body problem
Nicolas Payot
Mario Pasquato
Alessandro A. Trani
Chaotic systems such as the gravitational N-body problem are ubiquitous in astronomy. Machine learning (ML) is increasingly deployed to predict the evolution of such systems, e.g. with the goal of speeding up simulations. Strategies such as active learning (AL) are a natural choice to optimize ML training. Here we showcase an AL failure when predicting the stability of the Sitnikov three-body problem, the simplest case of the N-body problem displaying chaotic behavior. We link this failure to the fractal nature of our classification problem's decision boundary. This is a potential pitfall in optimizing large sets of N-body simulations via AL in the context of star cluster physics, galactic dynamics, or cosmology.
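For context, the sketch below shows a standard pool-based active learning loop with uncertainty sampling on synthetic 2-D data with a smooth, linearly separable boundary. It illustrates the AL strategy the abstract refers to, not the actual Sitnikov stability setup, where the fractal decision boundary is precisely what defeats this kind of loop.

```python
# Minimal pool-based active learning loop with uncertainty sampling on
# synthetic 2-D data.  Illustrative only; not the Sitnikov experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.uniform(-1, 1, size=(2000, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # smooth boundary; a fractal one would defeat this loop

labeled = list(rng.choice(len(X_pool), size=20, replace=False))  # small random seed set
for _ in range(30):
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labeled] = np.inf                 # never re-query labeled points
    labeled.append(int(np.argmin(uncertainty)))   # query the most uncertain point

print("accuracy on the full pool:", clf.score(X_pool, y_pool))
```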
Bayesian Imaging for Radio Interferometry with Score-Based Priors
Noé Dia
M. J. Yantovski-Barth
Alexandre Adam
Micah Bowles
Pablo Lemos
A. Scaife
U. Montréal
Ciela Institute
Flatiron Institute
Echoes in the Noise: Posterior Samples of Faint Galaxy Surface Brightness Profiles with Score-Based Likelihoods and Priors
Alexandre Adam
Connor Stone
Connor Bottrell
Ronan Legin
Examining the detailed structure of galaxy populations provides valuable insights into their formation and evolution mechanisms. Significant barriers to such analysis are the non-trivial noise properties of real astronomical images and the point spread function (PSF) which blurs structure. Here we present a framework which combines recent advances in score-based likelihood characterization and diffusion model priors to perform a Bayesian analysis of image deconvolution. The method, when applied to minimally processed Hubble Space Telescope (HST) data, recovers structures which have otherwise only become visible in next-generation James Webb Space Telescope (JWST) imaging.
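To sketch the underlying idea, the example below runs a schematic Langevin-style sampler for deconvolution in which the posterior score is the sum of a Gaussian likelihood score (known PSF, convolution forward model) and a prior score. Here `prior_score` is a placeholder where a trained diffusion-model score would go, and the noise level, step size, and sampler are illustrative choices, not the paper's actual pipeline.

```python
# Schematic posterior sampling for image deconvolution via unadjusted
# Langevin dynamics: posterior score = likelihood score + prior score.
# `prior_score` is a placeholder for a trained diffusion-model score.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
k = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(k, k)
psf /= psf.sum()                                  # normalized Gaussian PSF
sigma = 0.05                                      # assumed per-pixel noise level

truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                         # toy "galaxy"
y = fftconvolve(truth, psf, mode="same") + sigma * rng.normal(size=truth.shape)

def likelihood_score(x):
    """Gradient of log N(y | psf * x, sigma^2) with respect to x."""
    residual = y - fftconvolve(x, psf, mode="same")
    return fftconvolve(residual, psf[::-1, ::-1], mode="same") / sigma**2

def prior_score(x):
    """Placeholder weak Gaussian prior; a diffusion-model score would go here."""
    return -0.1 * x

x = np.zeros_like(y)
step = 1e-3
for _ in range(500):
    score = likelihood_score(x) + prior_score(x)  # Bayes: scores add in log space
    x = x + step * score + np.sqrt(2 * step) * rng.normal(size=x.shape)

print("RMS data residual of the sample:",
      np.sqrt(np.mean((y - fftconvolve(x, psf, mode="same")) ** 2)))
```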