Publications

Derivation and validation of indices incorporating vasopressor dose and blood pressure values over time
Alain Gervais
François Lamontagne
Jean-Baptiste Michaud
Neill K. J. Adhikari
Jean-Michel Pagé
Marie-Hélène Masse
Michael O. Harhay
Michael Chassé
Félix Lamontagne
Katia Laforge
Alexandra Fortin
Marc-André Leclair
Simon Lévesque
Marie-Pier Domingue
Neda Momenzadeh
Ruxandra Pinto
Maxime Morin-Lavoie
Félix Camirand Lemyre
Rationale: The blood pressure value below which the benefits of vasopressors clearly outweigh their disadvantages is uncertain. Objectives: The main objective of this analysis was to investigate the statistical properties and potential utility of indices estimating vasopressor dose-rates as a function of blood pressure values over time. Methods: In this single-center observational study, we collected blood pressure values from intensive care unit (ICU) monitors and norepinephrine dose-rates from infusion pumps corresponding to a derivation and a validation cohort. Patients included in each cohort were 18 years or older and received norepinephrine in the ICU. We defined and derived indices corresponding to vasopressor therapy above (>65 mmHg) and below (<60 mmHg) targets. We report the distribution of both indices over time from both cohorts as well as their associations with hospital mortality.
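The abstract describes indices that combine vasopressor dose-rate with time spent above or below blood pressure targets. A minimal illustrative sketch of one way such indices could be computed is shown below; the weighting scheme and the function name are assumptions for illustration, not the paper's actual definition.

```python
# Hedged sketch (not the paper's exact definition): time-weighted indices
# combining norepinephrine dose-rate with mean arterial pressure (MAP)
# relative to the >65 mmHg and <60 mmHg targets named in the abstract.
# The dose-times-duration weighting is an illustrative assumption.

def vasopressor_indices(samples, upper=65.0, lower=60.0):
    """samples: list of (minutes_elapsed, map_mmhg, dose_mcg_kg_min) tuples,
    assumed sorted by time. Returns (above_target_index, below_target_index):
    dose-rate integrated over intervals spent above `upper` / below `lower`
    mmHg, normalized by total observation time."""
    if len(samples) < 2:
        return 0.0, 0.0
    above = below = 0.0
    total = samples[-1][0] - samples[0][0]
    # Treat each measurement as constant until the next one (step function).
    for (t0, bp, dose), (t1, _, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        if bp > upper:
            above += dose * dt
        elif bp < lower:
            below += dose * dt
    return above / total, below / total
```

For example, a patient above target on vasopressors for a third of the observation window would accumulate a nonzero above-target index even with a zero below-target index.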
Not Only the Last-Layer Features for Spurious Correlations: All Layer Deep Feature Reweighting
Spurious correlations are a major source of errors for machine learning models, in particular when aiming for group-level fairness. It has been recently shown that a powerful approach to combat spurious correlations is to re-train the last layer on a balanced validation dataset, isolating robust features for the predictor. However, key attributes can sometimes be discarded by neural networks towards the last layer. In this work, we thus consider retraining a classifier on a set of features derived from all layers. We utilize a recently proposed feature selection strategy to select unbiased features from all the layers. We observe this approach gives significant improvements in worst-group accuracy on several standard benchmarks.
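The general recipe the abstract builds on — freeze a trained network, extract features, and refit only a linear head on a group-balanced validation set — can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's code: the feature matrix stands in for (selected) features from any layers, and the balancing and head-training routines are simplified.

```python
import numpy as np

# Hedged sketch of last-layer / feature reweighting (illustrative, not the
# paper's implementation): subsample the validation set so every
# (label, group) cell is equally represented, then refit a linear head
# from scratch on those balanced features.

rng = np.random.default_rng(0)

def balance_by_group(X, y, groups):
    """Subsample rows so every (label, group) cell has equal size."""
    pairs = list(zip(y, groups))
    keys = list(set(pairs))
    n = min(pairs.count(k) for k in keys)
    idx = []
    for k in keys:
        cell = [i for i, p in enumerate(pairs) if p == k]
        idx.extend(rng.choice(cell, size=n, replace=False))
    return X[idx], y[idx]

def refit_linear_head(X, y, lr=0.1, steps=500):
    """Logistic regression trained from scratch on the balanced features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        g = p - y                               # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```

Balancing before refitting is the key step: it removes the correlation between the spurious group attribute and the label, so the retrained head has no incentive to rely on group-predictive features.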
Protein Language Models: Is Scaling Necessary?
Robert M. Vernon
Benjamin Schulz
Christopher James Langmead
Public protein sequence databases contain samples from the fitness landscape explored by nature. Protein language models (pLMs) pre-trained on these sequences aim to capture this landscape for tasks like property prediction and protein design. Following the same trend as in natural language processing, pLMs have continuously been scaled up. However, the premise that scale leads to better performance assumes that source databases provide an accurate representation of the underlying fitness landscape, which is likely false. By developing an efficient codebase, designing a modern architecture, and addressing data quality concerns such as sample bias, we introduce AMPLIFY, a best-in-class pLM that is orders of magnitude less expensive to train and deploy than previous models. Furthermore, to support the scientific community and democratize the training of pLMs, we have open-sourced AMPLIFY’s pre-training codebase, data, and model checkpoints.
Self Supervised Dictionary Learning Using Kernel Matching
Shubham Choudhary
Demba Ba
We introduce a self-supervised framework for learning representations in the context of dictionary learning. We cast the problem as a kernel matching task between the input and the representation space, with constraints on the latent kernel. By adjusting these constraints, we demonstrate how the framework can adapt to different learning objectives. We then formulate a novel Alternating Direction Method of Multipliers (ADMM) based algorithm to solve the optimization problem and connect the dynamics to classical alternating minimization techniques. This approach offers a unique way of learning representations under kernel constraints, enabling us to implicitly learn a generative map for the data from the learned representations, with broad applications in representation learning tasks in both machine learning and neuroscience.
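The core idea — learn representations whose kernel matches the input kernel, subject to a latent constraint — can be illustrated with a much simpler solver than the paper's ADMM algorithm. The sketch below uses plain projected gradient descent on the Frobenius gap between the two linear kernels, with nonnegativity as an example latent constraint; all of these choices are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the kernel-matching idea (illustrative, not the paper's
# ADMM algorithm): learn representations Z whose linear kernel Z Z^T matches
# the input kernel X X^T, under an example latent constraint (nonnegativity),
# via projected gradient descent on the Frobenius gap.

def kernel_matching(X, dim, lr=0.01, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                        # input (linear) kernel
    Z = 0.1 * rng.normal(size=(n, dim))
    for _ in range(steps):
        R = Z @ Z.T - K                # kernel residual
        Z -= lr * 4.0 * (R @ Z) / n    # gradient of ||Z Z^T - K||_F^2, scaled
        Z = np.maximum(Z, 0.0)         # project onto the latent constraint set
    return Z
```

Swapping the projection step for a different constraint set (e.g. sparsity or a simplex) is how such a framework adapts to different learning objectives.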
AI content detection in the emerging information ecosystem: new obligations for media and tech companies
Alistair Knott
Dino Pedreschi
Toshiya Jitsuzumi
Susan Leavy
David Eyers
Tapabrata Chakraborti
Andrew Trotman
Sundar Sundareswaran
Ricardo Baeza-Yates
Przemyslaw Biecek
Adrian Weller
Paul D. Teal
Subhadip Basu
Mehmet Haklidir
Virginia Morini
Stuart Russell
ToxiSight: Insights Towards Detected Chat Toxicity
We present a comprehensive explainability dashboard designed for in-game chat toxicity. This dashboard integrates various existing explainable AI (XAI) techniques, including token importance analysis, model output visualization, and attribution to the training dataset. It also provides insights through the closest positive and negative examples, facilitating a deeper understanding and potential correction of the training data. Additionally, the dashboard includes word sense analysis—particularly useful for new moderators—and offers free-text explanations for both positive and negative predictions. This multi-faceted approach enhances the interpretability and transparency of toxicity detection models.
ChainBuddy: An AI Agent System for Generating LLM Pipelines
Development of small, cost‐efficient scintillating fiber detectors for automated synthesis of positron emission tomography radiopharmaceuticals
Hailey Ahn
Liam Carroll
Robert Hopewell
I-Huang Tsai
Dean Jolly
Gassan Massarweh
S. Enger
The Bifurcation Method: White-Box Observation Perturbation Attacks on Reinforcement Learning Agents on a Cyber Physical System
Kiernan Broda-Milian
Ranwa Al Mallah
Diagnostic tests for infections in critically ill immunocompromised patients
Adrien Joseph
Lara Zafrani
Dynamic HumTrans: Humming Transcription Using CNNs and Dynamic Programming
Isaac Neri Gomez-Sarmiento
Faez Amjed Mezdari
Mirco Ravanelli
Yusuf Cem Sübakan
Explaining Network Decision Provides Insights on the Causal Interaction Between Brain Regions in a Motor Imagery Task
Mirco Ravanelli