Tracking white and grey matter degeneration along the spinal cord axis in degenerative cervical myelopathy
Kevin Vallotton
Gergely David
Markus Hupp
Nikolai Pfender
Michael Fehlings
Rebecca S. Samson
Claudia A. M. Gandini Wheeler-Kingshott
Armin Curt
Patrick Freund
Maryam Seif
Objective: To determine tissue-specific neurodegeneration across the spinal cord in patients with mild-moderate degenerative cervical myelopathy (DCM). Methods: Twenty-four mild-moderate DCM patients and 24 healthy subjects were recruited. In patients, a T2-weighted scan was acquired at the compression site, while in all participants a T2*-weighted and a diffusion-weighted scan were acquired at the cervical level (C2-C3) and in the lumbar enlargement (i.e. rostral and caudal to the site of compression). We quantified intramedullary signal changes, maximal canal and cord compression, white matter (WM) and grey matter (GM) atrophy, and microstructural indices from diffusion-weighted scans. All patients underwent clinical (modified Japanese Orthopaedic Association (mJOA)) and electrophysiological assessments. Regression analysis assessed associations between MRI readouts and electrophysiological and clinical outcomes. Results: Twenty patients were classified with mild and four with moderate DCM using the mJOA scale. The most frequent site of compression was the C5-C6 level, with a maximum cord compression of 4.68±0.83 mm. Ten patients showed imaging evidence of cervical myelopathy. In the cervical cord, WM and GM atrophy and WM microstructural changes were evident, while in the lumbar cord only the WM showed atrophy and microstructural changes. Remote cervical cord WM microstructural changes were pronounced in patients with radiological myelopathy and associated with impaired electrophysiology. Lumbar cord WM atrophy was associated with lower-limb sensory impairments. Conclusion: Tissue-specific neurodegeneration revealed by quantitative MRI is already apparent across the spinal cord in mild-moderate DCM, prior to the onset of severe clinical impairments. WM microstructural changes are particularly sensitive to remote, pathologically and clinically eloquent changes in DCM.
Gradient Masked Federated Optimization
Irene Tenison
Sreya Francis
hBERT + BiasCorp - Fighting Racism on the Web
Olawale Moses Onabola
Zhuang Ma
Xie Yang
Benjamin Akera
Ibraheem Abdulrahman
Jia Xue
Dianbo Liu
Subtle and overt racism is still present in both physical and online communities today and has impacted many lives in different segments of society. In this short piece of work, we present how we are tackling this societal issue with Natural Language Processing. We are releasing BiasCorp, a dataset containing 139,090 comments and news segments from three specific sources: Fox News, BreitbartNews and YouTube. The first batch (45,000 manually annotated) is ready for publication. We are currently in the final phase of manually labeling the remaining dataset using Amazon Mechanical Turk. BERT has been used widely in several downstream tasks. In this work, we present hBERT, where we modify certain layers of the pretrained BERT model with the new Hopfield layer. hBERT generalizes well across different distributions, with the added advantage of reduced model complexity. We are also releasing a JavaScript library and a Chrome extension to help developers make use of our trained model in web applications (say, chat applications) and to help users identify and report racially biased content on the web.
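The abstract does not spell out the Hopfield layer's update rule; as a hedged sketch, the modern continuous Hopfield networks that such layers build on retrieve a stored pattern via a softmax update. The dimensions, temperature `beta`, and stored patterns below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=2.0, steps=3):
    """Modern Hopfield update: xi <- X @ softmax(beta * X.T @ xi).
    X: (d, N) matrix of N stored patterns; xi: (d,) query state."""
    for _ in range(steps):
        xi = X @ softmax(beta * (X.T @ xi))
    return xi

# Store two opposite patterns; a noisy query snaps to the nearer one.
rng = np.random.default_rng(0)
X = np.stack([np.ones(8), -np.ones(8)], axis=1)     # (8, 2)
query = np.ones(8) + 0.3 * rng.standard_normal(8)
out = hopfield_retrieve(X, query)
print(np.allclose(out, np.ones(8), atol=0.1))       # retrieved the +1 pattern
```

With a high enough `beta`, one update already places nearly all softmax mass on the closest stored pattern, which is the one-step retrieval property these layers exploit.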
INFOSHIELD: Generalizable Information-Theoretic Human-Trafficking Detection
Meng-Chieh Lee
Catalina Vajiac
Aayushi Kulshrestha
Sacha Lévy
Namyong Park
Cara Jones
Christos Faloutsos
Given a million escort advertisements, how can we spot near-duplicates? Such micro-clusters of ads are usually signals of human trafficking. How can we summarize them, visually, to convince law enforcement to act? Can we build a general tool that works for different languages? Spotting micro-clusters of near-duplicate documents is useful in multiple additional settings, including spam-bot detection in Twitter ads, plagiarism detection, and more. We present INFOSHIELD, which makes the following contributions: (a) Practical, being scalable and effective on real data; (b) Parameter-free and Principled, requiring no user-defined parameters; (c) Interpretable, finding a document to be the cluster representative, highlighting all the common phrases, and automatically detecting "slots", i.e. phrases that differ in every document; and (d) Generalizable, beating or matching domain-specific methods in Twitter bot detection and human-trafficking detection respectively, as well as being language-independent, finding clusters in Spanish, Italian, and Japanese. Interpretability is particularly important for the anti-human-trafficking domain, where law enforcement must visually inspect ads. Our experiments on real data show that INFOSHIELD correctly identifies Twitter bots with an F1 score over 90% and detects human-trafficking ads with 84% precision. Moreover, it is scalable, requiring about 8 hours for 4 million documents on a stock laptop.
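INFOSHIELD's own algorithm is information-theoretic and parameter-free, and is not reproduced here. As a toy illustration of the underlying task of micro-clustering near-duplicate documents, here is a greedy sketch using word shingles and a Jaccard-similarity threshold; the shingle size `k=3` and the `0.5` threshold are arbitrary choices for the example, not the paper's method.

```python
def shingles(text, k=3):
    """Set of word k-grams ("shingles") for a document."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def micro_clusters(docs, k=3, threshold=0.5):
    """Greedy single-link grouping of near-duplicate documents."""
    sigs = [shingles(d, k) for d in docs]
    clusters = []                        # each cluster: list of doc indices
    for i, sig in enumerate(sigs):
        for cl in clusters:
            if any(jaccard(sig, sigs[j]) >= threshold for j in cl):
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters

ads = [
    "young friendly girl new in town call 555 0101 today",
    "young friendly girl new in town call 555 0202 today",
    "quality used car parts shipped fast nationwide",
]
print(micro_clusters(ads))  # → [[0, 1], [2]]: the near-duplicate ads group
```

The phone number that varies between the first two ads is exactly the kind of "slot" INFOSHIELD detects automatically; this sketch only groups the documents, without extracting the slots.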
Characterizing Idioms: Conventionality and Contingency
Michaela Socolof
Michael Wagner
Idioms are unlike most phrases in two important ways. First, words in an idiom have non-canonical meanings. Second, the non-canonical meanings of words in an idiom are contingent on the presence of other words in the idiom. Linguistic theories differ on whether these properties depend on one another, as well as whether special theoretical machinery is needed to accommodate idioms. We define two measures that correspond to the properties above, and we show that idioms fall at the expected intersection of the two dimensions, but that the dimensions themselves are not correlated. Our results suggest that introducing special machinery to handle idioms may not be warranted.
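The abstract does not define the two measures; as an illustrative sketch only, a contingency-style dimension could be operationalized with pointwise mutual information over corpus counts, asking how much more often an idiom's words co-occur than their individual frequencies predict. The counts below are invented for the example.

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information of a word pair:
    log2( P(x,y) / (P(x) * P(y)) ). Positive PMI means the pair
    co-occurs more often than chance given each word's frequency."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts from a 1M-bigram corpus: the parts of an idiom
# like "kick the bucket" co-occur far above chance.
total = 1_000_000
score = pmi(count_xy=50, count_x=800, count_y=600, total=total)
print(round(score, 2))  # → 6.7
```

A high score on a measure like this captures only the contingency dimension; the non-canonical-meaning dimension would need a separate, semantics-based measure, which is precisely why the paper treats the two as distinct axes.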
Ethics of Corporeal, Co-present Robots as Agents of Influence: a Review
Shalaleh Rismani
H. V. D. Van der Loos
Towards Causal Federated Learning For Enhanced Robustness and Privacy
Sreya Francis
Irene Tenison
Federated Learning is an emerging privacy-preserving distributed machine learning approach that builds a shared model by performing distributed training locally on participating devices (clients) and aggregating the local models into a global one. Because this approach avoids centralized data collection and aggregation, it greatly reduces the associated privacy risks. However, the data samples across participating clients are usually not independent and identically distributed (non-iid), and out-of-distribution (OOD) generalization of the learned models can be poor. Beyond this challenge, federated learning also remains vulnerable to various security attacks, in which a few malicious participating entities work towards inserting backdoors, degrading the aggregated model, or inferring the data owned by other participating entities. In this paper, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup, and we analyze empirically how it enhances the OOD accuracy as well as the privacy of the final learned model.
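This is not the paper's method, but a minimal sketch of the general setting it operates in: federated averaging of client gradients, here combined with a crude invariance proxy that keeps only coordinates whose gradient sign is unanimous across clients (in the spirit of sign-agreement masking). The linear model, learning rate, and masking rule are all assumptions for illustration.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model on one client."""
    return 2 * X.T @ (X @ w - y) / len(y)

def masked_aggregate(grads):
    """Average client gradients, but zero out coordinates whose sign is
    not unanimous across clients: only update directions consistent for
    every client (a crude invariance proxy) survive aggregation."""
    G = np.stack(grads)                              # (clients, dims)
    agree = np.all(np.sign(G) == np.sign(G[0]), axis=0)
    return G.mean(axis=0) * agree

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):                                   # three clients' local data
    X = rng.standard_normal((50, 2))
    y = X @ w_true + 0.05 * rng.standard_normal(50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):                                 # federated rounds
    grads = [local_gradient(w, X, y) for X, y in clients]
    w -= 0.05 * masked_aggregate(grads)
print(np.round(w, 2))                                # close to w_true
```

Note that the clients here share one data-generating process, so the mask rarely fires; under non-iid clients with spurious features, the unanimity requirement is what would suppress client-specific (non-invariant) directions.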
Science-Software Linkage: The Challenges of Traceability between Scientific Knowledge and Software Artifacts
Hideaki Hata
Raula Gaikovina Kula
Christoph Treude
Although computer science papers are often accompanied by software artifacts, connecting research papers to their software artifacts and vice versa is not always trivial. First of all, there is a lack of well-accepted standards for how such links should be provided. Furthermore, the provided links, if any, often become outdated: they are affected by link rot when pre-prints are removed, when repositories are migrated, or when papers and repositories evolve independently. In this paper, we summarize the state of the practice of linking research papers and associated source code, highlighting the recent efforts towards creating and maintaining such links. We also report on the results of several empirical studies focusing on the relationship between scientific papers and associated software artifacts, and we outline challenges related to traceability and opportunities for overcoming these challenges.
Common Limitations of Image Processing Metrics: A Picture Story
Annika Reinke
Matthias Eisenmann
Minu Dietlinde Tizabi
Carole H. Sudre
Tim Rädsch
Michela Antonelli
Spyridon Bakas
M. Jorge Cardoso
Veronika Cheplygina
Keyvan Farahani
B. Glocker
Doreen Heckmann-Nötzel
Fabian Isensee
Pierre Jannin
Charles E. Jr. Kahn
Jens Kleesiek
Tahsin Kurc
Michal Kozubek
Bennett Landman
Geert Litjens
Klaus Maier-Hein
Bjoern Menze
Henning Müller
Jens Petersen
Mauricio Reyes
Nicola Rieke
Bram Stieltjes
R. Summers
Sotirios A. Tsaftaris
Bram van Ginneken
Annette Kopp-Schneider
Paul F. Jäger
Lena Maier-Hein
Maintenance of a collection of machines under partial observability: Indexability and computation of Whittle index
Nima Akbarzadeh
We consider the problem of scheduling maintenance for a collection of machines under partial observations, where the state of each machine deteriorates stochastically in a Markovian manner. We consider two observational models: in the first, the state of each machine is not observable at all; in the second, the state of each machine is observable only when a service-person visits it. A maintenance action, e.g., machine replacement, is taken on a machine when it is chosen for service. We model both problems as restless multi-armed bandits and propose the Whittle index policy for scheduling the visits. We show that both models are indexable. For the first model, we derive a closed-form expression for the Whittle index. For the second model, we propose an efficient algorithm to compute the Whittle index by exploiting the qualitative properties of the optimal policy. We present detailed numerical experiments which show that, for multiple instances of the model, the Whittle index policy outperforms the myopic policy and can be close to optimal in different setups.
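The paper's models are partially observed; as a simplified, fully observed illustration of what a Whittle index is, the sketch below computes, for a toy two-state machine, the passive subsidy λ at which leaving the machine alone and replacing it become equally attractive, via bisection around value iteration. All rewards, costs, and transition probabilities are invented for the example.

```python
import numpy as np

def solve(lmbda, r_p, r_a, P_p, P_a, gamma=0.9, iters=500):
    """Value iteration for one arm with a passive subsidy lmbda.
    Returns Q-values for (state, action): passive (Qp) and active (Qa)."""
    V = np.zeros(len(r_p))
    for _ in range(iters):
        Qp = r_p + lmbda + gamma * P_p @ V
        Qa = r_a + gamma * P_a @ V
        V = np.maximum(Qp, Qa)
    return Qp, Qa

def whittle_index(s, r_p, r_a, P_p, P_a, lo=-5.0, hi=5.0, tol=1e-6):
    """Bisect on the subsidy that makes passive and active equally good in s."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        Qp, Qa = solve(mid, r_p, r_a, P_p, P_a)
        if Qp[s] < Qa[s]:
            lo = mid          # subsidy too small: active still preferred
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy 2-state machine: state 0 = working, 1 = failed.
r_p = np.array([1.0, 0.2])                  # passive rewards per state
r_a = np.array([0.5, -0.3])                 # active = replace, cost 0.5
P_p = np.array([[0.7, 0.3], [0.0, 1.0]])    # deteriorates if left alone
P_a = np.array([[1.0, 0.0], [1.0, 0.0]])    # replacement restores state 0
idx_failed = whittle_index(1, r_p, r_a, P_p, P_a)
idx_working = whittle_index(0, r_p, r_a, P_p, P_a)
print(idx_failed > idx_working)  # failed machines get priority for service
```

The Whittle index policy then simply serves, at each step, the machines whose current index is largest, which is why a closed form or fast computation of the index (the paper's contribution) matters.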
Multi-tract multi-symptom relationships in pediatric concussion
Guido Ivan Guberman
Sonja Stojanovski
Eman Nishat
Alain Ptito
A. Wheeler
Maxime Descoteaux
The heterogeneity of white matter damage and symptoms in concussion has been identified as a major obstacle to therapeutic innovation. In contrast, the vast majority of diffusion MRI studies on concussion have traditionally employed group-comparison approaches, which do not account for the heterogeneity of damage and symptoms. To parse concussion heterogeneity, the present study combines diffusion MRI (dMRI) and multivariate statistics to investigate multi-tract multi-symptom relationships. Using dMRI data from a sample of 306 children aged 9 and 10 with a history of concussion from the Adolescent Brain Cognitive Development (ABCD) Study, we built connectomes weighted by classical and emerging diffusion measures. These measures were combined into two informative indices, the first capturing a mixture of patterns suggestive of microstructural complexity, the second representing almost exclusively axonal density. We deployed pattern-learning algorithms to jointly decompose these connectivity features and 19 behavioural measures that capture well-known symptoms of concussion. We found idiosyncratic, symptom-specific multi-tract connectivity features which would not be captured in traditional univariate analyses. Multivariable connectome-symptom correspondences were stronger than all single-tract/single-symptom associations. Multi-tract connectivity features were also expressed equally across different sociodemographic strata, and their expression was not accounted for by injury-related variables. In a replication dataset, the expression of multi-tract connectivity features predicted adverse psychiatric outcomes after accounting for other psychopathology-related variables.
By defining cross-demographic multi-tract multi-symptom relationships to parse concussion heterogeneity, the present study can pave the way for the development of improved stratification strategies that may contribute to the success of future clinical trials and the improvement of concussion management.
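The abstract does not name the pattern-learning algorithm; joint decompositions of a brain block and a behaviour block are commonly done with partial least squares or canonical correlation. Purely as a sketch, here is an SVD-based PLS-style decomposition on synthetic data with one planted shared pattern; the dimensions and data are fabricated for illustration.

```python
import numpy as np

def pls_modes(X, Y, n_modes=2):
    """SVD of the cross-covariance X.T @ Y yields paired weight vectors:
    columns of U weight the X (connectivity) features, columns of V the
    Y (symptom) measures, ordered by shared covariance."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    return U[:, :n_modes], Vt[:n_modes].T, s[:n_modes]

rng = np.random.default_rng(42)
n = 300                                    # participants
latent = rng.standard_normal(n)            # one shared "injury" pattern
X = np.outer(latent, rng.standard_normal(40)) + rng.standard_normal((n, 40))
Y = np.outer(latent, rng.standard_normal(19)) + rng.standard_normal((n, 19))

U, V, s = pls_modes(X, Y)
# Subject scores on the first mode correlate across the two data blocks,
# recovering the planted multi-tract multi-symptom relationship.
r = np.corrcoef(X @ U[:, 0], Y @ V[:, 0])[0, 1]
print(abs(r) > 0.5)
```

Each recovered mode pairs a weighted combination of tracts with a weighted combination of symptoms, which is what distinguishes this family of methods from single-tract/single-symptom univariate tests.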
Safe option-critic: learning safety in the option-critic architecture
Designing hierarchical reinforcement learning algorithms that exhibit safe behaviour is not only vital for practical applications but also facilitates a better understanding of an agent's decisions. We tackle this problem in the options framework (Sutton, Precup & Singh, 1999), a particular way to specify temporally abstract actions which allow an agent to use sub-policies with start and end conditions. We consider a behaviour safe if it avoids regions of the state space with high uncertainty in the outcomes of actions. We propose an optimization objective that learns safe options by encouraging the agent to visit states with higher behavioural consistency. The proposed objective results in a trade-off between maximizing the standard expected return and minimizing the effect of model uncertainty in the return. We propose a policy gradient algorithm to optimize the constrained objective function. We examine the quantitative and qualitative behaviours of the proposed approach in a tabular grid world, a continuous-state puddle world, and three games from the Arcade Learning Environment: Ms. Pacman, Amidar, and Q*Bert. Our approach achieves a reduction in the variance of return, boosts performance in environments with intrinsic variability in the reward structure, and compares favourably both with primitive actions and with risk-neutral options.
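The paper's exact uncertainty-penalized objective is not given in the abstract; the general shape of such a trade-off, expected return minus a penalty on return variability, can be sketched with assumed numbers as follows (the coefficient `c` and the sample returns are illustrative).

```python
import numpy as np

def safe_objective(returns, c=0.5):
    """Trade-off J = E[G] - c * Var[G]: maximize expected return while
    penalizing its variability across trajectories -- the shape of a
    risk-sensitive objective like the safe-options trade-off."""
    returns = np.asarray(returns, dtype=float)
    return returns.mean() - c * returns.var()

# A risky option: higher mean return, huge spread across trajectories.
# A safe option: lower mean, but steady.
risky = [10.0, -6.0, 12.0, -4.0]
safe = [3.0, 4.0, 3.0, 4.0]
print(safe_objective(risky), safe_objective(safe))  # → -29.5 3.375
```

Under a plain expected-return criterion the risky option (mean 3.0) and the safe option (mean 3.5) are nearly tied; the variance penalty is what makes the consistent option clearly preferred, mirroring the reduction in return variance the paper reports.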