No matter the size: democratizing protein discovery with AI
Mila researchers have created a powerful open-source protein language model that is more compact and efficient, in order to democratize protein discovery.
The next cohort of our program, designed to give participants a foundational understanding of AI technologies, will take place in Ottawa on November 28 and 29.
Publications
Transfer functions: learning about a lagged exposure-outcome association in time-series data
Many population exposures in time-series analysis, including food marketing, exhibit a time-lagged association with population health outcomes such as food purchasing. A common approach to measuring patterns of associations over different time lags relies on a finite-lag model, which requires correct specification of the maximum duration over which the lagged association extends. However, the maximum lag is frequently unknown due to the lack of substantive knowledge or the geographic variation of lag length. We describe a time-series analytical approach based on an infinite-lag specification under a transfer function model that avoids the specification of an arbitrary maximum lag length. We demonstrate its application to estimating a lagged exposure-outcome association in food environment research: the display promotion of sugary beverages and its lagged association with sales.
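To make the idea concrete, here is a minimal sketch (not the paper's code, and using the simple Koyck/geometric special case of a transfer function) of why an infinite-lag specification avoids choosing a maximum lag: when the lag weights decay geometrically, the infinite sum collapses to a recursion with only two lag parameters.

```python
# Sketch: with weights beta_k = omega * delta**k, the infinite-lag model
#   y_t = alpha + sum_k beta_k * x_{t-k} + e_t
# collapses to the recursive (Koyck) form
#   y_t = alpha*(1 - delta) + delta*y_{t-1} + omega*x_t + u_t,
# so only (omega, delta) are estimated and no max-lag cutoff is needed.
import numpy as np

rng = np.random.default_rng(0)
T, omega, delta, alpha = 500, 2.0, 0.6, 1.0

x = rng.normal(size=T)          # exposure series (e.g., display promotion)
y = np.zeros(T)
for t in range(1, T):
    y[t] = alpha * (1 - delta) + delta * y[t - 1] + omega * x[t] \
           + rng.normal(scale=0.5)

# OLS on the recursive form recovers the lag structure directly
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(f"estimated delta={coef[1]:.2f} (true {delta}), "
      f"omega={coef[2]:.2f} (true {omega})")
```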
Natural language processing (NLP) and understanding aim to read unformatted text to accomplish different tasks. While word embeddings learned by deep neural networks are widely used, the underlying linguistic and semantic structures of text pieces cannot be fully exploited in these representations. Graphs are a natural way to capture the connections between different text pieces, such as entities, sentences, and documents. To overcome the limits of vector-space models, researchers combine deep learning models with graph-structured representations for various tasks in NLP and text mining. Such combinations make full use of both the structural information in text and the representation-learning ability of deep neural networks. In this chapter, we introduce the various graph representations that are extensively used in NLP, and show how different NLP tasks can be tackled from a graph perspective. We summarize recent research on graph-based NLP, and discuss two case studies related to graph-based text clustering, matching, and multi-hop machine reading comprehension in detail. Finally, we provide a synthesis of the important open problems of this subfield.
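As a toy illustration of the graph view of text (a sketch under our own assumptions, not code from the chapter), the snippet below links sentences that share vocabulary and ranks them by centrality, which is the kind of structure graph-based NLP methods build on:

```python
# Sketch: sentences become nodes; an edge links sentences with shared words.
import itertools
import networkx as nx

sentences = [
    "Graphs capture connections between text pieces.",
    "Deep neural networks learn word embeddings.",
    "Graph neural networks combine both graphs and embeddings.",
]

def tokens(s):
    return {w.strip(".,").lower() for w in s.split()}

G = nx.Graph()
G.add_nodes_from(range(len(sentences)))
for i, j in itertools.combinations(range(len(sentences)), 2):
    overlap = tokens(sentences[i]) & tokens(sentences[j])
    if overlap:                            # connect sentences sharing vocabulary
        G.add_edge(i, j, weight=len(overlap))

# Centrality over this graph is the idea behind TextRank-style ranking
print(nx.pagerank(G, weight="weight"))
```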
The genome of the Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), the pathogen that causes coronavirus disease 2019 (COVID-19), has been sequenced at an unprecedented scale, leading to a tremendous amount of viral genome sequencing data. To understand the evolution of this virus in humans, and to assist in tracing infection pathways and designing preventive strategies, we present a set of computational tools that span phylogenomics, population genetics, and machine learning approaches. To illustrate the utility of this toolbox, we detail an in-depth analysis of the genetic diversity of SARS-CoV-2 in the first year of the COVID-19 pandemic, using 329,854 high-quality consensus sequences published in the GISAID database during the pre-vaccination phase. We demonstrate that, compared to standard phylogenetic approaches, haplotype networks can be computed efficiently on much larger datasets, enabling real-time analyses. Furthermore, the change of Tajima’s D over time provides a powerful metric of population expansion. Unsupervised learning techniques further highlight key steps in variant detection and facilitate the study of the role of this genomic variation in the context of SARS-CoV-2 infection, with the Multiscale PHATE methodology identifying fine-scale structure in the SARS-CoV-2 genetic data that underlies the emergence of key lineages. The computational framework presented here is useful for real-time genomic surveillance of SARS-CoV-2 and could be applied to any pathogen that threatens the health of worldwide populations of humans and other organisms.
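For readers unfamiliar with Tajima’s D, here is a minimal sketch of the standard statistic (the textbook formula, not the authors' implementation); strongly negative values indicate an excess of rare variants, consistent with population expansion:

```python
# Sketch: Tajima's D compares two estimators of genetic diversity,
# mean pairwise differences (pi) and segregating sites (S / a1).
import math
from itertools import combinations

def tajimas_d(seqs):
    n, L = len(seqs), len(seqs[0])
    # S: number of segregating (polymorphic) sites
    S = sum(1 for k in range(L) if len({s[k] for s in seqs}) > 1)
    if S == 0:
        return 0.0
    # pi: mean number of pairwise differences between sequences
    pi = sum(sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)) / math.comb(n, 2)
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

print(tajimas_d(["AATG", "AATG", "AACG", "GATG"]))  # about -0.7
```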
Alzheimer’s disease and related dementias are a major public health burden, one that will compound in the coming years as longevity increases. Recently, clinical evidence has hinted that the experience of social isolation expedites dementia onset. In 502,506 UK Biobank participants and 30,097 participants from the Canadian Longitudinal Study of Aging, we revisited traditional risk factors for developing dementia in the context of loneliness and lacking social support. Across these measures of subjective and objective social deprivation, we identified strong links between individuals’ social capital and various indicators of Alzheimer’s disease and related dementias risk, which replicated across both population cohorts. The quality and quantity of daily social encounters had deep connections with key aetiopathological factors, which represent 1) personal habits and lifestyle factors, 2) physical health, 3) mental health, and 4) societal and external factors. Our population-scale assessment suggests that social lifestyle determinants are linked to most neurodegeneration risk factors, highlighting them as promising targets for preventive clinical action.
The random utility maximization model is by far the most adopted framework to estimate consumer choice behavior. However, behavioral economics has provided strong empirical evidence of irrational choice behaviors, such as halo effects, that are incompatible with this framework. Models belonging to the random utility maximization family may therefore not accurately capture such irrational behavior. Hence, more general choice models, overcoming such limitations, have been proposed. However, the flexibility of such models comes at the price of increased risk of overfitting. As such, estimating such models remains a challenge. In this work, we propose an estimation method for the recently proposed generalized stochastic preference choice model, which subsumes the family of random utility maximization models and is capable of capturing halo effects. In particular, we propose a column-generation method to gradually refine the discrete choice model based on partially ranked preference sequences. Extensive computational experiments indicate that our model, explicitly accounting for irrational preferences, can significantly boost the predictive accuracy on both synthetic and real-world data instances. Summary of Contribution: In this work, we propose an estimation method for the recently proposed generalized stochastic preference choice model, which subsumes the family of random utility maximization models and is capable of capturing halo effects. Specifically, we show how to use partially ranked preferences to efficiently model rational and irrational customer types from transaction data. Our estimation procedure is based on column generation, where relevant customer types are efficiently extracted by expanding a treelike data structure containing the customer behaviors. Furthermore, we propose a new dominance rule among customer types whose effect is to prioritize low orders of interactions among products. An extensive set of experiments assesses the predictive accuracy of the proposed approach by comparing it against rank-based methods with only rational preferences and with more general benchmarks from the literature. Our results show that accounting for irrational preferences can boost predictive accuracy by 12.5% on average when tested on a real-world data set from a large chain of grocery and drug stores.
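A simplified sketch of the generalized stochastic preference idea (our own toy version, not the paper's estimation code): each customer type is a preference order plus a choice index k, and buys the k-th most preferred offered product, so k > 1 captures irrational, halo-like behavior while k = 1 recovers the rational rank-based model.

```python
# Sketch: choice probabilities under a mixture of (order, k, weight) types.
from collections import defaultdict

# (preference order over products, choice index k, probability of the type)
types = [
    (("a", "b", "c"), 1, 0.5),   # rational: buys the most preferred offered
    (("a", "b", "c"), 2, 0.3),   # irrational: buys the second-most preferred
    (("c", "a", "b"), 1, 0.2),
]

def choice_probs(offer_set, types):
    probs = defaultdict(float)
    for order, k, w in types:
        available = [p for p in order if p in offer_set]
        if len(available) >= k:           # the type's k-th choice is offered
            probs[available[k - 1]] += w
        # otherwise the type leaves without purchasing
    return dict(probs)

print(choice_probs({"a", "b"}, types))    # {'a': 0.7, 'b': 0.3}
```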
Purpose: A major obstacle to the clinical implementation of quantitative MR is the lengthy acquisition time required to derive multi-contrast parametric maps. We sought to reduce the acquisition time for quantitative susceptibility mapping (QSM) and macromolecular tissue volume (MTV) by acquiring both contrasts simultaneously, leveraging their redundancies. The Joint Virtual Coil concept with generalized autocalibrating partially parallel acquisitions (JVC-GRAPPA) was applied to reduce acquisition time further. Methods: Three adult volunteers were imaged on a 3T scanner using a multi-echo 3D GRE sequence acquired at three head orientations. MTV, QSM, R2*, T1, and proton density maps were reconstructed. The same sequence (GRAPPA R=4) was performed in subject #1 with a single head orientation for comparison. Fully sampled data were acquired in subject #2, from which retrospective undersampling was performed (R=6 GRAPPA and R=9 JVC-GRAPPA). Prospective undersampling was performed in subject #3 (R=6 GRAPPA and R=9 JVC-GRAPPA) using gradient blips to shift k-space sampling in later echoes. Results: Subject #1’s multi-orientation and single-orientation MTV maps were not significantly different based on RMSE. For subject #2, the retrospectively undersampled JVC-GRAPPA and GRAPPA generated results similar to the fully sampled data. This approach was validated with the prospectively undersampled images in subject #3. Using QSM, R2*, and MTV, the contributions of myelin and iron content to susceptibility were estimated. Conclusion: We have developed a novel strategy to simultaneously acquire data for the reconstruction of five intrinsically co-registered 1-mm isotropic resolution multi-parametric maps, with a scan time of 6 minutes using JVC-GRAPPA.
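As background for one of the five maps, here is a minimal sketch (not the paper's reconstruction pipeline) of how an R2* map is typically fit voxel-wise from multi-echo GRE magnitude data, using the mono-exponential decay S(TE) = S0·exp(−R2*·TE):

```python
# Sketch: log-linear fit of multi-echo signal decay to recover R2* and S0.
import numpy as np

TEs = np.array([0.005, 0.010, 0.015, 0.020])   # echo times in seconds

def fit_r2star(signal, tes):
    # linear regression of log-signal on TE: log S = log S0 - R2* * TE
    slope, intercept = np.polyfit(tes, np.log(signal), 1)
    return -slope, np.exp(intercept)            # (R2* in 1/s, S0)

# synthetic single-voxel example with R2* = 40 1/s
true_r2s, true_s0 = 40.0, 1000.0
signal = true_s0 * np.exp(-true_r2s * TEs)
r2s, s0 = fit_r2star(signal, TEs)
print(f"R2* = {r2s:.1f} 1/s, S0 = {s0:.0f}")    # recovers 40.0 and 1000
```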
Background: Research on the integration of artificial intelligence (AI) into community-based primary health care (CBPHC) has highlighted several advantages and disadvantages in practice, for example in facilitating diagnosis and disease management, as well as doubts concerning the unintended harmful effects of this integration. However, a comprehensive knowledge synthesis that could shed light on AI systems tested or implemented in CBPHC is lacking. Objective: We intended to identify and evaluate published studies that have tested or implemented AI in CBPHC settings. Methods: We conducted a systematic scoping review informed by an earlier study and the Joanna Briggs Institute (JBI) scoping review framework, and reported the findings according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) reporting guidelines. An information specialist performed a comprehensive search, from the date of inception until February 2020, in seven bibliographic databases: Cochrane Library, MEDLINE, EMBASE, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), ScienceDirect, and IEEE Xplore. The selected studies considered all populations who provide and receive care in CBPHC settings, AI interventions that had been implemented, tested, or both, and assessed outcomes related to patients, health care providers, or CBPHC systems. Risk of bias was assessed using the Prediction Model Risk of Bias Assessment Tool (PROBAST). Two authors independently screened the titles and abstracts of the identified records, read the selected full texts, and extracted data from the included studies using a validated extraction form. Disagreements were resolved by consensus, and if this was not possible, the opinion of a third reviewer was sought. A third reviewer also validated all the extracted data. Results: We retrieved 22,113 documents. After the removal of duplicates, 16,870 documents were screened, and 90 peer-reviewed publications met our inclusion criteria. Machine learning (ML) (41/90, 45%), natural language processing (NLP) (24/90, 27%), and expert systems (17/90, 19%) were the most commonly studied AI interventions. These were primarily implemented for diagnosis, detection, or surveillance purposes. Neural networks (ie, convolutional neural networks and abductive networks) demonstrated the highest accuracy, considering the given database for the given clinical task. The risk of bias in diagnosis or prognosis studies was the lowest in the participant category (4/49, 4%) and the highest in the outcome category (22/49, 45%). Conclusions: We observed variability in the reporting of participants, types of AI methods, analyses, and outcomes, and highlighted the large gap in the effective development and implementation of AI in CBPHC. Further studies are needed to efficiently guide the development and implementation of AI interventions in CBPHC settings.