Publications
Normalizing automatic spinal cord cross-sectional area measures
Spinal cord cross-sectional area (CSA) is a relevant biomarker for assessing spinal cord atrophy in various neurodegenerative diseases. However, the considerable inter-subject variability among healthy participants currently limits its usage. Previous studies explored factors contributing to this variability, yet the normalization models were based on a relatively limited number of participants (typically 300), required manual intervention, and were not implemented in an open-access comprehensive analysis pipeline. Another limitation is the imprecise prediction of spinal levels when using vertebral levels as a reference, a question not previously addressed in the search for a normalization method. In this study, we implemented a method to measure CSA automatically from a spatial reference based on the central nervous system (the pontomedullary junction, PMJ), investigated various factors explaining the variability, and developed normalization strategies on a large cohort (N = 804). Cervical spinal cord CSA was computed on T1w MRI scans for 804 participants from the UK Biobank database. In addition to computing cross-sectional area at the C2-C3 vertebral disc, CSA was also measured 64 mm caudal to the PMJ. The effect of various biological, demographic and anatomical factors was explored by computing Pearson's correlation coefficients. A stepwise linear regression identified significant predictors; the coefficients of the best-fit model were used to normalize CSA. The linear relationship between CSA measured at C2-C3 and at the PMJ-based reference was y = 0.98x + 1.78 (R² = 0.97). The best normalization model included thalamus volume, brain volume, sex, and the interaction between brain volume and sex. With this model, the coefficient of variation decreased from 10.09% (without normalization) to 8.59%, a reduction of 14.85%. In this study we identified factors explaining inter-subject variability of spinal cord CSA over a large cohort of participants, and developed a normalization model to reduce this variability. We implemented a PMJ-based approach to measure CSA in order to overcome limitations associated with the vertebral reference. This approach warrants further validation, especially in longitudinal cohorts. The PMJ-based method and normalization models are readily available in the Spinal Cord Toolbox.
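As an illustration of how such covariate-based normalization is typically applied, the sketch below adjusts a raw CSA value using a fitted linear model. The coefficient values, cohort means, and function name are placeholders invented for illustration; they are not the published model, which is implemented in the Spinal Cord Toolbox.

```python
# Minimal sketch of covariate-adjusted CSA normalization (placeholder values).
BETA = {
    "thalamus_volume": 0.5,       # hypothetical coefficient, mm^2 per cm^3
    "brain_volume": 0.02,         # hypothetical coefficient, mm^2 per cm^3
    "sex": -1.0,                  # hypothetical offset for sex coded 0/1
    "brain_volume_x_sex": 0.01,   # hypothetical interaction coefficient
}
COHORT_MEANS = {"thalamus_volume": 16.0, "brain_volume": 1500.0,
                "sex": 0.5, "brain_volume_x_sex": 750.0}

def normalize_csa(csa, predictors):
    """Remove the variance explained by the predictors, pulling each
    participant's CSA toward the value expected for the cohort mean."""
    adjustment = sum(BETA[k] * (predictors[k] - COHORT_MEANS[k]) for k in BETA)
    return csa - adjustment

# Example participant: raw CSA in mm^2 and covariates (illustrative numbers).
covariates = {"thalamus_volume": 17.1, "brain_volume": 1580.0, "sex": 1.0}
covariates["brain_volume_x_sex"] = covariates["brain_volume"] * covariates["sex"]
print(normalize_csa(72.4, covariates))
```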
Many population exposures in time-series analysis, including food marketing, exhibit a time-lagged association with population health outcomes such as food purchasing. A common approach to measuring patterns of associations over different time lags relies on a finite-lag model, which requires correct specification of the maximum duration over which the lagged association extends. However, the maximum lag is frequently unknown due to a lack of substantive knowledge or geographic variation in lag length. We describe a time-series analytical approach based on an infinite-lag specification under a transfer function model that avoids specifying an arbitrary maximum lag length. We demonstrate its application to estimating a lagged exposure-outcome association in food environment research: display promotion of sugary beverages and lagged sales.
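One standard way to write such an infinite-lag association without fixing a maximum lag is a geometrically decaying (Koyck-type) transfer function; the equations below sketch that idea and are illustrative only, not necessarily the exact specification used in the paper.

\[
y_t = \alpha + \beta \sum_{k=0}^{\infty} \delta^{k} x_{t-k} + \varepsilon_t, \qquad 0 < \delta < 1,
\]

so the effect of the exposure \(x\) on the outcome \(y\) decays geometrically with lag \(k\) and no maximum lag needs to be chosen. Subtracting \(\delta y_{t-1}\) from both sides collapses the infinite sum into an estimable form with only a finite number of parameters:

\[
y_t = \alpha(1-\delta) + \delta\, y_{t-1} + \beta x_t + \varepsilon_t - \delta \varepsilon_{t-1}.
\]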
Natural language processing (NLP) and understanding aim to read unformatted text in order to accomplish different tasks. While word embeddings learned by deep neural networks are widely used, these representations cannot fully exploit the underlying linguistic and semantic structure of text. Graphs are a natural way to capture the connections between different text pieces, such as entities, sentences, and documents. To overcome the limitations of vector-space models, researchers combine deep learning models with graph-structured representations for various tasks in NLP and text mining. Such combinations make full use of both the structural information in text and the representation-learning ability of deep neural networks. In this chapter, we introduce the various graph representations that are extensively used in NLP, and show how different NLP tasks can be tackled from a graph perspective. We summarize recent research on graph-based NLP, and discuss two case studies in detail: graph-based text clustering and matching, and multihop machine reading comprehension. Finally, we provide a synthesis of the important open problems of this subfield.
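As a concrete, deliberately simple example of one graph representation of text, the sketch below builds a sentence graph whose nodes are sentences and whose edges are weighted by TF-IDF cosine similarity, then reads off clusters as connected components. The sentences, the similarity threshold, and the use of networkx and scikit-learn are illustrative choices, not the chapter's specific methods.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Graph neural networks learn node representations.",
    "Graph convolution aggregates node features from neighbors.",
    "The recipe calls for flour and sugar.",
    "Mix the flour with sugar and butter.",
]

# Nodes are sentences; an edge connects a pair whose TF-IDF cosine similarity
# exceeds a threshold (0.1 here, chosen arbitrarily for illustration).
tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)

G = nx.Graph()
G.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        if sim[i, j] > 0.1:
            G.add_edge(i, j, weight=float(sim[i, j]))

# A crude graph-based "clustering": connected components of the thresholded graph.
for component in nx.connected_components(G):
    print([sentences[i] for i in sorted(component)])
```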