Normalizing automatic spinal cord cross-sectional area measures
S. Bédard
Spinal cord cross-sectional area (CSA) is a relevant biomarker to assess spinal cord atrophy in various neurodegenerative diseases. However, the considerable inter-subject variability among healthy participants currently limits its usage. Previous studies explored factors contributing to this variability, yet the normalization models were based on a relatively small number of participants (typically around 300), required manual intervention, and were not implemented in an open-access comprehensive analysis pipeline. Another limitation is the imprecise prediction of spinal levels when using vertebral levels as a reference, a question never addressed before in the search for a normalization method. In this study we implemented a method to measure CSA automatically from a spatial reference based on the central nervous system (the pontomedullary junction, PMJ), investigated various factors to explain variability, and developed normalization strategies on a large cohort (N=804). Cervical spinal cord CSA was computed on T1w MRI scans for 804 participants from the UK Biobank database. In addition to computing cross-sectional area at the C2-C3 vertebral disc, it was also measured 64 mm caudal from the PMJ. The effect of various biological, demographic and anatomical factors was explored by computing Pearson's correlation coefficients. A stepwise linear regression identified significant predictors; the coefficients of the best-fit model were used to normalize CSA. The relationship between CSA measured at C2-C3 and at the PMJ was y = 0.98x + 1.78 (R² = 0.97). The best normalization model included thalamus volume, brain volume, sex, and the interaction between brain volume and sex. With this model, the coefficient of variation went down from 10.09% (without normalization) to 8.59%, a reduction of 14.85%.
In this study we identified factors explaining inter-subject variability of spinal cord CSA over a large cohort of participants, and developed a normalization model to reduce the variability. We implemented an approach, based on the PMJ, to measure CSA to overcome limitations associated with the vertebral reference. This approach warrants further validation, especially in longitudinal cohorts. The PMJ-based method and normalization models are readily available in the Spinal Cord Toolbox.
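The normalization step described above can be sketched as follows. This is an illustrative reconstruction on synthetic data: the variable names, units, and simulated coefficients are assumptions for demonstration, not the study's actual UK Biobank values or fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 804  # cohort size from the study; the data below are synthetic

# Synthetic predictors (illustrative units and distributions)
sex = rng.integers(0, 2, n).astype(float)        # 0 = female, 1 = male
brain_vol = rng.normal(1200.0, 100.0, n)         # cm^3
thalamus_vol = rng.normal(15.0, 1.5, n)          # cm^3

# Simulate CSA (mm^2) driven by the predictors plus noise
csa = 55 + 0.01 * brain_vol + 0.8 * thalamus_vol + 2.0 * sex + rng.normal(0, 5, n)

# Design matrix mirroring the best model reported in the abstract:
# thalamus volume, brain volume, sex, and the brain-volume-by-sex interaction
X = np.column_stack([np.ones(n), thalamus_vol, brain_vol, sex, brain_vol * sex])
beta, *_ = np.linalg.lstsq(X, csa, rcond=None)

# Normalize: subtract each participant's predicted deviation from the cohort mean
predicted = X @ beta
csa_norm = csa - (predicted - predicted.mean())

def cov_pct(v):
    """Coefficient of variation, in percent."""
    return 100.0 * v.std() / v.mean()

print(f"CoV raw: {cov_pct(csa):.2f}%, normalized: {cov_pct(csa_norm):.2f}%")
```

Centering the predicted values before subtraction preserves the cohort mean CSA, so the coefficient of variation shrinks only through reduced dispersion.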
Reward is enough
David Silver
Satinder Singh
Richard S. Sutton
Season-Based Occupancy Prediction in Residential Buildings Using Machine Learning Models
Bowen Yang
Fariborz Haghighat
Karthik Panchabikesan
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning
Barbara Feulner
Raeed H. Chowdhury
Lee Miller
Juan A. Gallego
Claudia Clopath
The Effect Size of Genes on Cognitive Abilities Is Linked to Their Expression Along the Major Hierarchical Gradient in the Human Brain
Sébastien Jacquemont
Guillaume Huguet
Elise Douard
Zohra Saci
Laura Almasy
David C. Glahn
Trade-off Between Accuracy and Fairness of Data-driven Building and Indoor Environment Models: A Comparative Study of Pre-processing Methods
Ying Sun
Fariborz Haghighat
Transfer functions: learning about a lagged exposure-outcome association in time-series data
Hiroshi Mamiya
Alexandra M. Schmidt
Erica E. M. Moodie
Many population exposures in time-series analysis, including food marketing, exhibit a time-lagged association with population health outcomes such as food purchasing. A common approach to measuring patterns of associations over different time lags relies on a finite-lag model, which requires correct specification of the maximum duration over which the lagged association extends. However, the maximum lag is frequently unknown due to the lack of substantive knowledge or the geographic variation of lag length. We describe a time-series analytical approach based on an infinite lag specification under a transfer function model that avoids the specification of an arbitrary maximum lag length. We demonstrate its application to estimate the lagged exposure-outcome association in food environmental research: display promotion of sugary beverages with lagged sales.
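One common infinite-lag specification under a transfer function model is a geometric (Koyck) lag, where the exposure effect decays by a fixed factor each period, so no maximum lag needs to be specified. The sketch below is a generic illustration on synthetic data, not the authors' implementation; the exposure, outcome, and parameter values are assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Synthetic weekly exposure (e.g., display promotion on/off)
x = rng.integers(0, 2, T).astype(float)

# Infinite geometric lag: effect_t = omega * x_t + delta * effect_{t-1},
# equivalent to sum_{k>=0} omega * delta**k * x_{t-k}
omega, delta = 3.0, 0.6
effect = np.zeros(T)
effect[0] = omega * x[0]
for t in range(1, T):
    effect[t] = omega * x[t] + delta * effect[t - 1]

y = 10.0 + effect + rng.normal(0, 0.3, T)  # outcome, e.g., weekly sales

# Koyck transformation: y_t = (1 - delta)*alpha + delta*y_{t-1} + omega*x_t + noise,
# so regressing y_t on y_{t-1} and x_t recovers delta and omega
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(f"estimated delta ~ {beta[1]:.2f}, omega ~ {beta[2]:.2f}")
```

With well-behaved noise the regression recovers both the decay rate and the immediate effect, while a finite-lag model would have required choosing a cut-off for the decaying tail.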
Graph Neural Networks in Natural Language Processing
Lingfei Wu
Natural language processing (NLP) and understanding aim to read from unformatted text to accomplish different tasks. While word embeddings learned by deep neural networks are widely used, the underlying linguistic and semantic structures of text pieces cannot be fully exploited in these representations. Graphs are a natural way to capture the connections between different text pieces, such as entities, sentences, and documents. To overcome the limits of vector space models, researchers combine deep learning models with graph-structured representations for various tasks in NLP and text mining. Such combinations help to make full use of both the structural information in text and the representation learning ability of deep neural networks. In this chapter, we introduce the various graph representations that are extensively used in NLP, and show how different NLP tasks can be tackled from a graph perspective. We summarize recent research works on graph-based NLP, and discuss two case studies related to graph-based text clustering, matching, and multi-hop machine reading comprehension in detail. Finally, we provide a synthesis about the important open problems of this subfield.
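As a minimal illustration of representing text as a graph, the snippet below builds a word co-occurrence graph from a toy corpus. It is a generic sketch of the idea, not tied to any specific method surveyed in the chapter; the corpus and the sentence-level co-occurrence criterion are assumptions for demonstration.

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus; a real graph-based NLP pipeline would start from documents
sentences = [
    "graph neural networks model text structure",
    "word embeddings model text meaning",
    "graph structure complements word embeddings",
]

# Build a word co-occurrence graph: nodes are words, and an edge connects
# two words appearing in the same sentence, weighted by co-occurrence count
edges = defaultdict(int)
for sent in sentences:
    words = set(sent.split())
    for u, v in combinations(sorted(words), 2):
        edges[(u, v)] += 1

# Node degree = number of distinct co-occurring words
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

print(f"'text' co-occurs with {degree['text']} distinct words")
```

Downstream, such a graph can be consumed by a graph neural network, with edge weights informing message passing between word nodes.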