Size doesn't matter: democratizing protein discovery with AI
Mila researchers have created a powerful open-source protein language model that is more compact and efficient, with the goal of democratizing protein discovery.
The next cohort of our program, designed to give participants a foundational understanding of AI technologies, will take place in Ottawa on November 28 and 29.
Suicide is a complex, multidimensional event and a significant challenge for prevention globally. Artificial intelligence (AI) and machine learning (ML) have emerged to harness large-scale datasets to enhance risk detection. In order to trust and act upon the predictions made with ML, more intuitive user interfaces must be validated. Thus, interpretable AI is one of the crucial directions that could allow policy and decision makers to make reasonable, data-driven decisions that can ultimately lead to better mental health services planning and suicide prevention. This research aimed to develop sex-specific ML models for predicting the population risk of suicide and to interpret the models. Data were from the Quebec Integrated Chronic Disease Surveillance System (QICDSS), covering up to 98% of the population of the province of Quebec and containing data for over 20,000 suicides between 2002 and 2019. We employed a case-control study design. Individuals were considered cases if they were aged 15+ and had died from suicide between January 1st, 2002, and December 31st, 2019 (n = 18,339). Controls were a random sample of 1% of the Quebec population aged 15+ in each year, who were alive on December 31st of that year, from 2002 to 2019 (n = 1,307,370). We included 103 features, covering individual, programmatic, systemic, and community factors, measured up to five years prior to the suicide events. We trained and then validated the sex-specific predictive risk models using supervised ML algorithms, including Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP). We computed operating characteristics, including sensitivity, specificity, and Positive Predictive Value (PPV). We then generated receiver operating characteristic (ROC) curves and calibration measures for the prediction of suicide. For interpretability, Shapley Additive Explanations (SHAP) were used as a global explanation to determine how much each input feature contributes to the models' output, together with the largest absolute coefficients. The best sensitivity was 0.38 with logistic regression for males and 0.47 with MLP for females; the XGBoost classifier had the best precision (PPV), with 0.25 for males and 0.19 for females. This study demonstrated the potential of explainable AI models as tools for decision-making and population-level suicide prevention actions. The ML models included individual, programmatic, systemic, and community-level variables routinely available to decision makers and planners in a publicly managed care system. Caution should be exercised in interpreting the variables associated with suicide in a predictive model, since the associations are not causal, and other study designs are required to establish the value of individual treatments. The next steps are to produce an intuitive user interface for decision makers, planners, and other stakeholders such as clinicians or representatives of families and people with lived experience of suicidal behaviors or death by suicide, for example to explore how variations in the quality of local-area primary care programs for depression or substance use disorders, or increases in regional mental health and addiction budgets, would lower suicide rates.
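As a rough illustration of the modelling-and-interpretation pipeline described above (not the study's actual code), the sketch below trains a sex-specific XGBoost classifier and derives a global SHAP importance ranking; the DataFrame `df`, the `sex` and `suicide` columns, and the `features` list are hypothetical placeholders.

```python
# Minimal sketch, assuming a pandas DataFrame `df` with a binary `suicide`
# label, a `sex` column, and the predictor columns listed in `features`.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, precision_score, recall_score

def fit_and_explain(df: pd.DataFrame, features: list, sex: str):
    # Restrict to one sex: the models are trained separately for males and females.
    sub = df[df["sex"] == sex]
    X, y = sub[features], sub["suicide"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)

    # Class weighting helps with the strong case/control imbalance.
    model = xgb.XGBClassifier(
        n_estimators=300,
        max_depth=4,
        scale_pos_weight=(y_tr == 0).sum() / (y_tr == 1).sum(),
        eval_metric="auc",
    )
    model.fit(X_tr, y_tr)

    # Operating characteristics: PPV (precision) and sensitivity (recall).
    y_prob = model.predict_proba(X_te)[:, 1]
    y_pred = (y_prob >= 0.5).astype(int)
    print(f"{sex}: AUC={roc_auc_score(y_te, y_prob):.3f} "
          f"PPV={precision_score(y_te, y_pred):.3f} "
          f"sensitivity={recall_score(y_te, y_pred):.3f}")

    # Global explanation: mean absolute SHAP value per feature.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_te)
    importance = pd.Series(np.abs(shap_values).mean(axis=0), index=features)
    return model, importance.sort_values(ascending=False)
```

The mean absolute SHAP value per feature mirrors the kind of global explanation reported above, while PPV and sensitivity correspond to the operating characteristics computed for each model.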
Few-shot learning has recently attracted significant interest in drug discovery, with a recent, fast-growing literature mostly involving convoluted meta-learning strategies. We revisit the more straightforward fine-tuning approach for molecular data and propose a regularized quadratic-probe loss based on the Mahalanobis distance. We design a dedicated block-coordinate descent optimizer, which avoids the degenerate solutions of our loss. Interestingly, our simple fine-tuning approach achieves highly competitive performance compared to state-of-the-art methods, while being applicable to black-box settings and removing the need for specific episodic pre-training strategies. Furthermore, we introduce a new benchmark to assess the robustness of the competing methods to domain shifts. In this setting, our fine-tuning baseline obtains consistently better results than meta-learning methods.
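To make the idea concrete, here is an illustrative and deliberately simplified NumPy sketch of a Mahalanobis-distance probe over frozen molecular embeddings; it is not the paper's exact regularized quadratic-probe loss or block-coordinate optimizer, and all names are placeholders.

```python
# Sketch: classify query embeddings by Mahalanobis distance to per-class means
# under a shared, regularized within-class covariance fit on the support set.
import numpy as np

def fit_mahalanobis_probe(X: np.ndarray, y: np.ndarray, reg: float = 1e-2):
    """X: (n_support, d) frozen embeddings; y: (n_support,) integer labels."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    # Shared within-class covariance, shrunk toward the identity for stability.
    centered = X - means[np.searchsorted(classes, y)]
    cov = centered.T @ centered / len(X) + reg * np.eye(X.shape[1])
    precision = np.linalg.inv(cov)
    return classes, means, precision

def predict(Xq: np.ndarray, classes, means, precision):
    # Squared Mahalanobis distance to each class mean; smaller is better.
    diffs = Xq[:, None, :] - means[None, :, :]          # (n_query, n_classes, d)
    d2 = np.einsum("qcd,de,qce->qc", diffs, precision, diffs)
    return classes[np.argmin(d2, axis=1)]
```

The shrinkage term `reg` plays the role of a regularizer in the few-shot regime, where the support set is far too small to estimate a full covariance reliably.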
At least half of the world's population does not have access to essential health services. Worse, large numbers of households are being pushed into poverty because they must pay for health care out of their own pockets.
This chapter explores the foundational concept of robustness in Machine Learning (ML) and its integral role in establishing trustworthiness in Artificial Intelligence (AI) systems. The discussion begins with a detailed definition of robustness, portraying it as the ability of ML models to maintain stable performance across varied and unexpected environmental conditions. ML robustness is dissected through several lenses: its complementarity with generalizability; its status as a requirement for trustworthy AI; its adversarial vs. non-adversarial aspects; its quantitative metrics; and its indicators such as reproducibility and explainability. The chapter delves into the factors that impede robustness, such as data bias, model complexity, and the pitfalls of underspecified ML pipelines. It surveys key techniques for robustness assessment from a broad perspective, including adversarial attacks, encompassing both digital and physical realms. It covers non-adversarial data shifts and nuances of Deep Learning (DL) software testing methodologies. The discussion progresses to explore amelioration strategies for bolstering robustness, starting with data-centric approaches like debiasing and augmentation. Further examination includes a variety of model-centric methods such as transfer learning, adversarial training, and randomized smoothing. Lastly, post-training methods are discussed, including ensemble techniques, pruning, and model repairs, emerging as cost-effective strategies to make models more resilient against the unpredictable. This chapter underscores the ongoing challenges and limitations in estimating and achieving ML robustness by existing approaches. It offers insights and directions for future research on this crucial concept, as a prerequisite for trustworthy AI systems.
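As a small illustration of one of the model-centric amelioration strategies surveyed here, the following PyTorch sketch performs a single adversarial-training step with an FGSM perturbation; the model, optimizer, and perturbation budget `epsilon` are assumed placeholders rather than anything prescribed by the chapter.

```python
# Sketch of one adversarial-training step (FGSM), assuming inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, x, y, optimizer, epsilon=0.03):
    # 1. Build an FGSM adversarial example from the clean batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    # 2. Train on a mix of clean and adversarial inputs.
    optimizer.zero_grad()
    total = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    total.backward()
    optimizer.step()
    return total.item()
```

Mixing clean and perturbed batches is one common recipe; the chapter also discusses alternatives such as randomized smoothing and post-training repairs that trade off robustness, accuracy, and cost differently.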