Publications

Gradient Masked Federated Optimization
Irene Tenison
Sreya Francis
hBERT + BiasCorp - Fighting Racism on the Web
Olawale Moses Onabola
Zhuang Ma
Xie Yang
Benjamin Akera
Ibraheem Abdulrahman
Jia Xue
Dianbo Liu
Subtle and overt racism is still present in both physical and online communities today and has impacted many lives in different segments of society. In this short piece of work, we present how we're tackling this societal issue with Natural Language Processing. We are releasing BiasCorp, a dataset containing 139,090 comments and news segments from three specific sources: Fox News, BreitbartNews and YouTube. The first batch (45,000 manually annotated) is ready for publication. We are currently in the final phase of manually labeling the remaining dataset using Amazon Mechanical Turk. BERT has been used widely in several downstream tasks. In this work, we present hBERT, where we modify certain layers of the pretrained BERT model with the new Hopfield Layer. hBERT generalizes well across different distributions with the added advantage of a reduced model complexity. We are also releasing a JavaScript library and a Chrome Extension application: the former helps developers make use of our trained model in web applications (e.g., chat applications), and the latter lets users identify and report racially biased content on the web.
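The abstract mentions replacing certain BERT layers with Hopfield layers. As a rough illustration only, the PyTorch sketch below implements the one-step modern Hopfield retrieval update, softmax(beta * QK^T)V, in the style of Ramsauer et al.'s "Hopfield Networks is All You Need"; the projection names, the beta value, and the 768-dimensional hidden size are assumptions, and how hBERT actually wires such layers into the pretrained BERT stack is not specified here.

```python
# Minimal sketch of a modern Hopfield (associative-memory) layer; an
# illustrative assumption, not the hBERT implementation itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HopfieldLayer(nn.Module):
    """One-step modern Hopfield retrieval: softmax(beta * Q K^T) V."""
    def __init__(self, dim: int, beta: float = 1.0):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # state (query) projection
        self.k = nn.Linear(dim, dim)   # stored-pattern (key) projection
        self.v = nn.Linear(dim, dim)   # pattern-value projection
        self.beta = beta               # inverse temperature of the energy

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = F.softmax(self.beta * q @ k.transpose(-2, -1), dim=-1)
        return attn @ v                # retrieved patterns

# Toy usage on a batch of 8 token embeddings with BERT-base's hidden size.
layer = HopfieldLayer(dim=768, beta=0.125)
tokens = torch.randn(2, 8, 768)
print(layer(tokens).shape)  # torch.Size([2, 8, 768])
```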
INFOSHIELD: Generalizable Information-Theoretic Human-Trafficking Detection
Meng-Chieh Lee
Catalina Vajiac
Aayushi Kulshrestha
Sacha Lévy
Namyong Park
Cara Jones
Christos Faloutsos
Given a million escort advertisements, how can we spot near-duplicates? Such micro-clusters of ads are usually signals of human trafficking. How can we summarize them, visually, to convince law enforcement to act? Can we build a general tool that works for different languages? Spotting micro-clusters of near-duplicate documents is useful in multiple additional settings, including spam-bot detection in Twitter ads, plagiarism, and more. We present INFOSHIELD, which makes the following contributions: (a) Practical, being scalable and effective on real data; (b) Parameter-free and Principled, requiring no user-defined parameters; (c) Interpretable, finding a document to be the cluster representative, highlighting all the common phrases, and automatically detecting "slots", i.e., phrases that differ in every document; and (d) Generalizable, beating or matching domain-specific methods in Twitter bot detection and human trafficking detection respectively, as well as being language-independent, finding clusters in Spanish, Italian, and Japanese. Interpretability is particularly important for the anti-human-trafficking domain, where law enforcement must visually inspect ads. Our experiments on real data show that INFOSHIELD correctly identifies Twitter bots with an F1 score over 90% and detects human-trafficking ads with 84% precision. Moreover, it is scalable, requiring about 8 hours for 4 million documents on a stock laptop.
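INFOSHIELD itself is parameter-free and information-theoretic; as a toy stand-in only, the sketch below groups near-duplicate documents by Jaccard similarity over word 3-shingles with a hand-picked threshold, just to illustrate what a micro-cluster of template ads with differing "slots" looks like. The threshold, shingle size, and example ads are all illustrative assumptions, not part of the paper's method.

```python
# Toy near-duplicate micro-clustering via shingle Jaccard similarity;
# a simplified stand-in for INFOSHIELD's MDL-based, parameter-free approach.
from itertools import combinations

def shingles(text: str, k: int = 3) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def micro_clusters(docs: list, threshold: float = 0.5) -> list:
    # Union-find over documents whose shingle sets are similar enough.
    parent = list(range(len(docs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    sigs = [shingles(d) for d in docs]
    for i, j in combinations(range(len(docs)), 2):
        if jaccard(sigs[i], sigs[j]) >= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(docs)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]

ads = [
    "young model new in town call 555 0101 today",
    "young model new in town call 555 0202 today",   # same template, new "slot"
    "vintage bicycle for sale lightly used great price",
]
print(micro_clusters(ads))  # [[0, 1]] -- the phone number acts as a slot
```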
Characterizing Idioms: Conventionality and Contingency
Michaela Socolof
Michael Wagner
Idioms are unlike most phrases in two important ways. First, words in an idiom have non-canonical meanings. Second, the non-canonical meanings of words in an idiom are contingent on the presence of other words in the idiom. Linguistic theories differ on whether these properties depend on one another, as well as whether special theoretical machinery is needed to accommodate idioms. We define two measures that correspond to the properties above, and we show that idioms fall at the expected intersection of the two dimensions, but that the dimensions themselves are not correlated. Our results suggest that introducing special machinery to handle idioms may not be warranted.
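The paper's two measures are not reproduced here; as a loose stand-in for the contingency dimension (how much an idiom's words depend on each other's presence), the sketch below computes pointwise mutual information between the words of a bigram. The corpus counts are made up for illustration.

```python
# PMI as a rough proxy for word-to-word contingency; counts are placeholders,
# not corpus statistics from the paper.
import math

def pmi(count_xy: int, count_x: int, count_y: int,
        n_bigrams: int, n_words: int) -> float:
    p_xy = count_xy / n_bigrams        # joint bigram probability
    p_x = count_x / n_words            # unigram probabilities
    p_y = count_y / n_words
    return math.log2(p_xy / (p_x * p_y))

# An idiom-like pair (rare words, frequently together) scores high;
# a free combination of common words scores near or below zero.
print(pmi(count_xy=50, count_x=400, count_y=300,
          n_bigrams=10**6, n_words=10**7))      # ~15.3
print(pmi(count_xy=5, count_x=40000, count_y=30000,
          n_bigrams=10**6, n_words=10**7))      # ~-1.3
```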
Ethics of Corporeal, Co-present Robots as Agents of Influence: a Review
Shalaleh Rismani
H. V. D. Van der Loos
Towards Causal Federated Learning For Enhanced Robustness and Privacy
Sreya Francis
Irene Tenison
Federated Learning is an emerging privacy-preserving distributed machine learning approach to building a shared model by performing distributed training locally on participating devices (clients) and aggregating the local models into a global one. As this approach prevents data collection and aggregation, it helps in reducing associated privacy risks to a great extent. However, the data samples across all participating clients are usually not independent and identically distributed (non-iid), and Out-of-Distribution (OOD) generalization for the learned models can be poor. Besides this challenge, federated learning also remains vulnerable to various attacks on security, wherein a few malicious participating entities work towards inserting backdoors, degrading the generated aggregated model, as well as inferring the data owned by participating entities. In this paper, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup and analyze empirically how it enhances the Out-of-Distribution (OOD) accuracy as well as the privacy of the final learned model.
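The abstract does not spell out its invariance objective; one common way to formalize invariant-feature learning across environments is the IRMv1 penalty of Arjovsky et al. (2019). The sketch below applies that penalty per client inside a FedAvg-style loop, as an illustrative substitute for (not a reproduction of) the authors' algorithm; the model, data, and hyperparameters are toy assumptions.

```python
# FedAvg with a per-client IRMv1 invariance penalty; an illustrative
# assumption about how causal/invariant features could be encouraged.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Gradient of the risk w.r.t. a dummy classifier scale; near zero when
    # the shared representation is simultaneously optimal for this client.
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

def local_update(model, data, lam=1.0, lr=0.01, steps=10):
    x, y = data
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(x).squeeze(-1)
        erm = F.binary_cross_entropy_with_logits(logits, y)
        loss = erm + lam * irm_penalty(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fed_avg(states):
    # Plain FedAvg aggregation: parameter-wise mean of client models.
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

# Toy round: two clients with differently scaled (non-iid) features.
torch.manual_seed(0)
global_model = nn.Linear(5, 1)
clients = [
    (torch.randn(64, 5), torch.randint(0, 2, (64,)).float()),
    (torch.randn(64, 5) * 2.0, torch.randint(0, 2, (64,)).float()),
]
states = [local_update(copy.deepcopy(global_model), c) for c in clients]
global_model.load_state_dict(fed_avg(states))
```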
Science-Software Linkage: The Challenges of Traceability between Scientific Knowledge and Software Artifacts
Hideaki Hata
Raula Gaikovina Kula
Christoph Treude
Although computer science papers are often accompanied by software artifacts, connecting research papers to their software artifacts and vice versa is not always trivial. First of all, there is a lack of well-accepted standards for how such links should be provided. Furthermore, the provided links, if any, often become outdated: they are affected by link rot when pre-prints are removed, when repositories are migrated, or when papers and repositories evolve independently. In this paper, we summarize the state of the practice of linking research papers and associated source code, highlighting the recent efforts towards creating and maintaining such links. We also report on the results of several empirical studies focusing on the relationship between scientific papers and associated software artifacts, and we outline challenges related to traceability and opportunities for overcoming these challenges.
Common Limitations of Image Processing Metrics: A Picture Story
Annika Reinke
Matthias Eisenmann
Minu Dietlinde Tizabi
Carole H. Sudre
Tim Rädsch
Michela Antonelli
Spyridon Bakas
M. Jorge Cardoso
Veronika Cheplygina
Keyvan Farahani
B. Glocker
Doreen Heckmann-Nötzel
Fabian Isensee
Pierre Jannin
Charles E. Jr. Kahn
Jens Kleesiek
Tahsin Kurc
Michal Kozubek
Bennett Landman
Geert Litjens
Klaus Maier-Hein
Bjoern Menze
Henning Müller
Jens Petersen
Mauricio Reyes
Nicola Rieke
Bram Stieltjes
R. Summers
Sotirios A. Tsaftaris
Bram van Ginneken
Annette Kopp-Schneider
Paul F. Jäger
Lena Maier-Hein
Maintenance of a collection of machines under partial observability: Indexability and computation of Whittle index
Nima Akbarzadeh
We consider the problem of scheduling maintenance for a collection of machines under partial observations when the state of each machine deteriorates stochastically in a Markovian manner. We consider two observational models: first, the state of each machine is not observable at all; second, the state of each machine is observable only if a service-person visits it. A maintenance action, e.g., machine replacement, is taken on a machine if it is chosen for service. We model both problems as restless multi-armed bandit problems and propose the Whittle index policy for scheduling the visits. We show that both models are indexable. For the first model, we derive a closed-form expression for the Whittle index. For the second model, we propose an efficient algorithm to compute the Whittle index by exploiting the qualitative properties of the optimal policy. We present detailed numerical experiments which show that, for multiple instances of the model, the Whittle index policy outperforms the myopic policy and can be close to optimal in different setups.
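The paper's closed-form index and its computation algorithm are not reproduced here; the sketch below only illustrates the generic Whittle index policy, which services the m arms whose current states carry the largest index values. The index table is a made-up placeholder, not the paper's expression.

```python
# Generic Whittle index policy for restless bandits; the index values are
# placeholders, not the closed-form indices derived in the paper.
import numpy as np

def whittle_policy(index_table: np.ndarray, states: np.ndarray, m: int) -> np.ndarray:
    """index_table[a, s] = Whittle index of arm a in state s; pick top-m arms."""
    indices = index_table[np.arange(len(states)), states]
    return np.argsort(indices)[-m:]   # arms to visit/maintain this round

# 4 machines, 3 deterioration states (0 = good, 2 = failed); indices grow
# with deterioration, so worse machines are prioritized for maintenance.
table = np.array([[0.1, 0.5, 0.9],
                  [0.2, 0.4, 0.8],
                  [0.0, 0.6, 1.0],
                  [0.1, 0.3, 0.7]])
states = np.array([2, 0, 1, 2])
print(whittle_policy(table, states, m=2))  # [3 0]: the two largest indices
```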
Multi-tract multi-symptom relationships in pediatric concussion
Guido Ivan Guberman
Sonja Stojanovski
Eman Nishat
Alain Ptito
A. Wheeler
Maxime Descoteaux
The heterogeneity of white matter damage and symptoms in concussions has been identified as a major obstacle to therapeutic innovation. In contrast, the vast majority of diffusion MRI studies on concussion have traditionally employed group-comparison approaches. Such studies do not consider heterogeneity of damage and symptoms in concussion. To parse concussion heterogeneity, the present study combines diffusion MRI (dMRI) and multivariate statistics to investigate multi-tract multi-symptom relationships. Using dMRI data from a sample of 306 children aged 9 and 10 with a history of concussion from the Adolescent Brain Cognitive Development Study (ABCD study), we built connectomes weighted by classical and emerging diffusion measures. These measures were combined into two informative indices, the first capturing a mixture of patterns suggestive of microstructural complexity, the second representing almost exclusively axonal density. We deployed pattern-learning algorithms to jointly decompose these connectivity features and 19 behavioural measures that capture well-known symptoms of concussions. We found idiosyncratic symptom-specific multi-tract connectivity features, which would not be captured in traditional univariate analyses. Multivariable connectome-symptom correspondences were stronger than all single-tract/single-symptom associations. Multi-tract connectivity features were also expressed equally across different sociodemographic strata, and their expression was not accounted for by injury-related variables. In a replication dataset, the expression of multi-tract connectivity features predicted adverse psychiatric outcomes after accounting for other psychopathology-related variables. By defining cross-demographic multi-tract multi-symptom relationships to parse concussion heterogeneity, the present study can pave the way for the development of improved stratification strategies that may contribute to the success of future clinical trials and the improvement of concussion management.
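The study's pattern-learning pipeline is more involved than this, but canonical correlation analysis (CCA) is one standard algorithm for jointly decomposing two views of the same subjects. The sketch below pairs random placeholder "connectome" and "symptom" matrices of roughly the shapes mentioned in the abstract (306 subjects, 19 behavioural measures; the 40 tract features are an assumption).

```python
# Joint decomposition of connectivity features and symptom scores via CCA;
# random placeholder data, not ABCD data, and not the paper's exact pipeline.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects, n_tracts, n_symptoms = 306, 40, 19
connectomes = rng.standard_normal((n_subjects, n_tracts))   # tract-wise features
symptoms = rng.standard_normal((n_subjects, n_symptoms))    # behavioural measures

cca = CCA(n_components=3)
X_scores, Y_scores = cca.fit_transform(connectomes, symptoms)

# Each component pairs a multi-tract connectivity pattern with a
# multi-symptom pattern; the canonical correlation measures the pairing.
for k in range(3):
    r = np.corrcoef(X_scores[:, k], Y_scores[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.2f}")
```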
Safe option-critic: learning safety in the option-critic architecture
Designing hierarchical reinforcement learning algorithms that exhibit safe behaviour is not only vital for practical applications but also facilitates a better understanding of an agent's decisions. We tackle this problem in the options framework (Sutton, Precup & Singh, 1999), a particular way to specify temporally abstract actions which allow an agent to use sub-policies with start and end conditions. We consider a behaviour to be safe if it avoids regions of state space with high uncertainty in the outcomes of actions. We propose an optimization objective that learns safe options by encouraging the agent to visit states with higher behavioural consistency. The proposed objective results in a trade-off between maximizing the standard expected return and minimizing the effect of model uncertainty in the return. We propose a policy gradient algorithm to optimize the constrained objective function. We examine the quantitative and qualitative behaviours of the proposed approach in a tabular grid world, continuous-state puddle world, and three games from the Arcade Learning Environment: Ms. Pacman, Amidar, and Q*Bert. Our approach achieves a reduction in the variance of return, boosts performance in environments with intrinsic variability in the reward structure, and compares favourably both with primitive actions and with risk-neutral options.
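The abstract describes a trade-off between expected return and the effect of model uncertainty; one hedged way to write such a safety-regularized objective (our notation, not necessarily the paper's exact formulation) is:

```latex
% Safety-regularized option objective (notation assumed): maximize the
% expected discounted return while penalizing, with weight psi >= 0, the
% variability of the return attributable to model uncertainty.
\[
J_{\mathrm{safe}}(\theta)
  = \mathbb{E}_{\pi_\theta}\Big[\textstyle\sum_{t} \gamma^{t} r_t\Big]
  - \psi\,\mathrm{Var}_{\pi_\theta}\Big[\textstyle\sum_{t} \gamma^{t} r_t\Big],
\qquad \psi \ge 0.
\]
```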
Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark
Vincent Dumoulin
Neil Houlsby
Utku Evci
Xiaohua Zhai
Ross Goroshin
Sylvain Gelly
Meta and transfer learning are two successful families of approaches to few-shot learning. Despite highly related goals, state-of-the-art advances in each family are measured largely in isolation of each other. As a result of diverging evaluation norms, a direct or thorough comparison of different approaches is challenging. To bridge this gap, we perform a cross-family study of the best transfer and meta learners on both a large-scale meta-learning benchmark (Meta-Dataset, MD), and a transfer learning benchmark (Visual Task Adaptation Benchmark, VTAB). We find that, on average, large-scale transfer methods (Big Transfer, BiT) outperform competing approaches on MD, even when trained only on ImageNet. In contrast, meta-learning approaches struggle to compete on VTAB when trained and validated on MD. However, BiT is not without limitations, and pushing for scale does not improve performance on highly out-of-distribution MD tasks. In performing this study, we reveal a number of discrepancies in evaluation norms and study some of these in light of the performance gap. We hope that this work facilitates sharing of insights from each community, and accelerates progress on few-shot learning.