Objective: This study evaluates multiple machine learning approaches to predict metabolic syndrome (MetS) risk in the Quebec, Canada population. We further perform explainability analysis to interpret model predictions and identify key features driving risk classification.
Methods and analysis: This study followed the Minimum Information about Clinical Artificial Intelligence Modeling (MI-CLAIM) guideline for reporting. We used cross-sectional data from the Canadian Community Health Survey (2015–2018) for the population living in the province of Quebec, which includes 42,279 participants. Partial sampling was used to obtain a balanced dataset for model development. We evaluated seven machine learning models for the defined classification task: Logistic Regression, XGBoost, LightGBM, TabNet, NODE, 1D-CNN, and Regularisation Cocktails. Performance was assessed using accuracy, precision, recall, F1-score, AUROC, and AUPRC, and interpretability was examined using SHAP to identify key predictors of MetS risk.
Results: After partial sampling, 7,866 participants (4,856 high-risk and 3,010 low-risk MetS cases) were included in the machine learning analysis. XGBoost and NODE showed the strongest performance: XGBoost achieved the highest accuracy (80.4%) and AUROC (84.1%), while NODE achieved the highest precision (80.1%) and AUPRC (86.0%). Explainability analysis identified age, perceived health, and sex as the most important features contributing to MetS risk predictions.
Conclusion: This study shows that machine learning can accurately predict MetS risk using self-reported health survey data from the Quebec population. Comparison of classical and deep learning approaches identified the optimal predictive model, and explainability analyses identified the most important features contributing to the risk predictions, which align with established clinical evidence.
These results support a machine learning–driven initial screening framework for population-level early identification of high-risk individuals, enabling targeted interventions and efficient allocation of healthcare resources.
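The evaluation protocol above (train a classifier, then score it on accuracy, precision, recall, F1, AUROC, and AUPRC) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the study's pipeline: the real work used CCHS survey features and seven models, whereas here Logistic Regression stands in as the simplest of them, and the class proportions merely mimic the 4,856/3,010 split.

```python
# Hedged sketch: synthetic stand-in for a MetS risk dataset; feature values and
# model choice are illustrative only, not the study's actual data or best model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score)

# ~7,866 samples with roughly the study's high-risk/low-risk class balance.
X, y = make_classification(n_samples=7866, n_features=20,
                           weights=[0.38, 0.62], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)                 # hard labels for threshold metrics
y_prob = model.predict_proba(X_test)[:, 1]     # scores for ranking metrics

metrics = {
    "accuracy":  accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall":    recall_score(y_test, y_pred),
    "f1":        f1_score(y_test, y_pred),
    "auroc":     roc_auc_score(y_test, y_prob),             # ranking quality
    "auprc":     average_precision_score(y_test, y_prob),   # precision-recall area
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Note that AUROC and AUPRC are computed from predicted probabilities rather than hard labels, which is why both `predict` and `predict_proba` outputs are kept; AUPRC is the more informative of the two when classes are imbalanced, as in the screening setting the abstract describes.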
Artificial intelligence (AI) is increasingly used in healthcare to support the prevention and management of cardiovascular disease (CVD); however, its ethical implications in clinical practice, particularly for female patients, remain insufficiently explored. This study aimed to explore clinicians' perspectives on the ethical use of AI for preventing and managing CVD in female patients. A qualitative descriptive design was employed using semi-structured interviews with clinicians practicing in Montreal, Canada. Interviews were conducted online, audio-recorded with participants' consent, and transcribed for analysis. Data were analyzed using deductive thematic analysis informed by ethical domains in established AI frameworks. Ethical approval was obtained from McGill University's Research Ethics Board. The study adhered to the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. A final sample of twelve clinicians was interviewed, with each interview lasting approximately 60 minutes. Four key themes emerged: fairness, privacy and security, explainability, and data integrity. Clinicians expressed concerns that AI-enabled technologies may introduce or reinforce biases affecting certain populations, including older adults, individuals with limited digital literacy, and those lacking reliable internet access or access to digital technologies. Participants also raised concerns regarding data integrity, privacy, and security, and emphasized the importance of transparent and understandable AI outputs to support clinical decision-making. Ethical considerations are fundamental to the responsible integration of AI in cardiovascular care. Addressing concerns related to fairness, privacy and security, explainability, and data integrity may strengthen clinician trust and support the implementation of AI-enabled technologies in clinical practice.
Future research should explore practical approaches to address these concerns and assess how ethically informed AI systems can be implemented effectively in clinical practice.
This study aimed to (1) explore clinicians' perspectives of cardiovascular disease (CVD) and risk management in female patients and (2) describe clinicians' needs and desired features in AI-enabled tools for primary prevention and management of CVD among female patients. This work employed a qualitative description design. We conducted semi-structured interviews with 12 clinicians in Montreal, Canada. We used inductive thematic analysis to interpret the data. Seven themes emerged from the analysis. Three themes were related to the first objective: complexity in clinical decision-making, limitations of CVD risk assessment tools, and resources and health literacy. Four themes were related to the second objective: AI efficiency, multilingual design, electronic medical record integration, and ease of use. Clinicians reported challenges in supporting female patients at higher risk for CVD and expressed concerns about existing decision support tools. They showed openness to AI-enabled tools like Xi-Care and provided input on desired features to ensure their usability and effectiveness. There is a demand to support clinicians in the primary prevention and management of CVD among female patients. AI-enabled tools could effectively address this demand, provided their development prioritizes clinicians' needs and perspectives to ensure safe and effective implementation.
The lack of Equity, Diversity, and Inclusion (EDI) principles in the lifecycle of Artificial Intelligence (AI) technologies in healthcare is a growing concern. Despite its importance, there is still a gap in understanding the initiatives undertaken to address this issue. This review aims to explore what and how EDI principles have been integrated into the design, development, and implementation of AI studies in healthcare. We followed the scoping review framework by Levac et al. and the Joanna Briggs Institute. A comprehensive search was conducted until April 29, 2022, across MEDLINE, Embase, PsycInfo, Scopus, and SCI-EXPANDED. Only research studies in which the integration of EDI in AI was the primary focus were included. Non-research articles were excluded. Two independent reviewers screened the abstracts and full texts, resolving disagreements by consensus or by consulting a third reviewer. To synthesize the findings, we conducted a thematic analysis and used a narrative description. We adhered to the PRISMA-ScR checklist for reporting scoping reviews. The search yielded 10,664 records, with 42 studies included. Most studies were conducted on the American population. Previous research has shown that AI models improve when socio-demographic factors such as gender and race are considered. Despite frameworks for EDI integration, no comprehensive approach systematically applies EDI principles in AI model development. Additionally, the integration of EDI into the AI implementation phase remains under-explored, and the representation of EDI within AI teams has been overlooked. This review reports on what and how EDI principles have been integrated into the design, development, and implementation of AI technologies in healthcare. We used a thorough search strategy and rigorous methodology, though we acknowledge limitations such as language and publication bias.
A comprehensive framework is needed to ensure that EDI principles are considered throughout the AI lifecycle. Future research could focus on strategies to reduce algorithmic bias, assess the long-term impact of EDI integration, and explore policy implications to ensure that AI technologies are ethical, responsible, and beneficial for all.