Publications

Integrating equity, diversity, and inclusion throughout the lifecycle of artificial intelligence for healthcare: a scoping review
Elham Emami
Dana Jafarpour
Raymond Tolentino
Genevieve Gore
The lack of Equity, Diversity, and Inclusion (EDI) principles in the lifecycle of Artificial Intelligence (AI) technologies in healthcare is a growing concern. Despite its importance, there is still a gap in understanding the initiatives undertaken to address this issue. This review aims to explore what and how EDI principles have been integrated into the design, development, and implementation of AI studies in healthcare. We followed the scoping review framework by Levac et al. and the Joanna Briggs Institute. A comprehensive search was conducted until April 29, 2022, across MEDLINE, Embase, PsycInfo, Scopus, and SCI-EXPANDED. Only research studies in which the integration of EDI in AI was the primary focus were included. Non-research articles were excluded. Two independent reviewers screened the abstracts and full texts, resolving disagreements by consensus or by consulting a third reviewer. To synthesize the findings, we conducted a thematic analysis and used a narrative description. We adhered to the PRISMA-ScR checklist for reporting scoping reviews. The search yielded 10,664 records, with 42 studies included. Most studies were conducted on the American population. Previous research has shown that AI models improve when socio-demographic factors such as gender and race are considered. Despite frameworks for EDI integration, no comprehensive approach systematically applies EDI principles in AI model development. Additionally, the integration of EDI into the AI implementation phase remains under-explored, and the representation of EDI within AI teams has been overlooked. This review reports on what and how EDI principles have been integrated into the design, development, and implementation of AI technologies in healthcare. We used a thorough search strategy and rigorous methodology, though we acknowledge limitations such as language and publication bias. A comprehensive framework is needed to ensure that EDI principles are considered throughout the AI lifecycle. Future research could focus on strategies to reduce algorithmic bias, assess the long-term impact of EDI integration, and explore policy implications to ensure that AI technologies are ethical, responsible, and beneficial for all.
LLMs and Stack Overflow discussions: Reliability, impact, and challenges
Leuson Da Silva
Jordan Samhi
Mixed-integer Second-Order Cone Programming for Multi-period Scheduling of Flexible AC Transmission System Devices
Mohamad Charara
Martin De Montigny
Nivine Abou Daher
Model approximation in MDPs with unbounded per-step cost
Ashutosh Nayyar
Yi Ouyang
We consider the problem of designing a control policy for an infinite-horizon discounted cost Markov decision process …
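As standard background for this setting (ours, not the paper's contribution): an infinite-horizon discounted-cost MDP is typically solved by iterating the Bellman optimality operator, and model approximation asks how much a policy computed from an approximate model degrades the true cost. Below is a minimal value-iteration sketch on a hypothetical toy model; note the paper additionally handles unbounded per-step costs, which this sketch does not.

```python
# Standard background, not the paper's method: value iteration for an
# infinite-horizon discounted-COST MDP (minimization), on a toy
# 2-state, 2-action placeholder model with hypothetical numbers.
import numpy as np

def value_iteration(P, c, gamma=0.95, tol=1e-8):
    """P[a, s, s'] transition probabilities, c[s, a] per-step costs."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality update: V(s) = min_a [ c(s,a) + gamma * E[V(s')] ]
        Q = c + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

# Toy placeholder model (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
c = np.array([[1.0, 0.5],
              [0.2, 0.8]])
V, policy = value_iteration(P, c)
print(V, policy)
```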
A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment
Jean-Philippe Corbeil
Amin Dada
Jean-Michel Attendu
Asma Ben Abacha
Lucas Caccia
François Beaulieu
Thomas Lin
Jens Kleesiek
Paul Vozila
High computation costs and latency of large language models such as GPT-4 have limited their deployment in clinical settings. Small language models (SLMs) offer a cost-effective alternative, but their limited capacity requires biomedical domain adaptation, which remains challenging. An additional bottleneck is the unavailability and high sensitivity of clinical data. To address these challenges, we propose a novel framework for adapting SLMs into high-performing clinical models. We introduce the MediPhi collection of 3.8B-parameter SLMs developed with our novel framework: pre-instruction tuning of experts on relevant medical and clinical corpora (PMC, Medical Guideline, MedWiki, etc.), model merging, and clinical-tasks alignment. To cover most clinical tasks, we extended the CLUE benchmark to CLUE+, doubling its size. Our expert models deliver relative improvements on this benchmark over the base model without any task-specific fine-tuning: 64.3% on medical entities, 49.5% on radiology reports, and 44% on ICD-10 coding (outperforming GPT-4-0125 by 14%). We unify the expert models into MediPhi via model merging, preserving gains across benchmarks. Furthermore, we built the MediFlow collection, a synthetic dataset of 2.5 million high-quality instructions on 14 medical NLP tasks, 98 fine-grained document types, and JSON format support. Alignment of MediPhi using supervised fine-tuning and direct preference optimization achieves further gains of 18.9% on average.
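The abstract unifies expert models via model merging. As an illustration of what such a step can look like, here is a minimal sketch of uniform parameter averaging across same-architecture expert checkpoints; the checkpoint filenames are hypothetical placeholders, and the authors' actual merging procedure may differ.

```python
# Minimal sketch of model merging by uniform parameter averaging.
# Assumes PyTorch-style state dicts; the expert checkpoint names are
# hypothetical placeholders, not the MediPhi experts themselves.
import torch

def merge_state_dicts(state_dicts):
    """Average the parameters of several same-architecture expert models."""
    merged = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        merged[key] = stacked.mean(dim=0)
    return merged

# Usage: load same-architecture expert checkpoints and average them.
experts = [torch.load(p) for p in ["expert_medical.pt", "expert_clinical.pt"]]
torch.save(merge_state_dicts(experts), "merged_model.pt")
```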
Modulation of leg trajectory by transcranial magnetic stimulation during walking
H. Bourgeois
Rose Guay-Hottin
El-Mehdi Meftah
Marina Martinez
D. Barthélemy
Multi-Armed Sampling Problem and the End of Exploration
This paper introduces the framework of multi-armed sampling, as the sampling counterpart to the optimization problem of multi-armed bandits. Our primary motivation is to rigorously examine the exploration-exploitation trade-off in the context of sampling. We systematically define plausible notions of regret for this framework and establish corresponding lower bounds. We then propose a simple algorithm that achieves these optimal regret bounds. Our theoretical results demonstrate that, in contrast to optimization, sampling does not require exploration. To further connect our findings with those of multi-armed bandits, we define a continuous family of problems and associated regret measures that smoothly interpolates and unifies multi-armed sampling and multi-armed bandit problems using a temperature parameter. We believe the multi-armed sampling framework and our findings in this setting can have a foundational role in the study of sampling, including recent neural samplers, akin to the role of multi-armed bandits in reinforcement learning. In particular, our work sheds light on the need for exploration and the convergence properties of algorithms for entropy-regularized reinforcement learning, fine-tuning of pretrained models, and reinforcement learning with human feedback (RLHF).
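To make the temperature interpolation concrete, one plausible formalization (our notation and an assumption on our part; the paper's exact definitions may differ) targets a Boltzmann distribution over arms:

```latex
% One plausible formalization (our notation, not necessarily the paper's).
% Arms have mean rewards \mu_1, \dots, \mu_K; for temperature \tau > 0,
% the target is the Boltzmann (softmax) distribution over arms:
\[
  \pi_\tau(a) = \frac{\exp(\mu_a / \tau)}{\sum_{b=1}^{K} \exp(\mu_b / \tau)}.
\]
% As \tau \to 0, \pi_\tau concentrates on \arg\max_a \mu_a, recovering the
% multi-armed bandit objective; for fixed \tau > 0 the task is to produce
% draws whose law is close to \pi_\tau, i.e., the sampling regime.
```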
Multiscale Neural PDE Surrogates for Prediction and Downscaling: Application to Ocean Currents
Abdessamad El-Kabid
Redouane Lguensat
Alex Hernandez-Garcia
Accurate modeling of physical systems governed by partial differential equations is a central challenge in scientific computing. In oceanography, high-resolution current data are critical for coastal management, environmental monitoring, and maritime safety. However, available satellite products, such as Copernicus data for sea water velocity at ~0.08 degrees spatial resolution and global ocean models, often lack the spatial granularity required for detailed local analyses. In this work, we (a) introduce a supervised deep learning framework based on neural operators for solving PDEs and providing arbitrary resolution solutions, and (b) propose downscaling models with an application to Copernicus ocean current data. Additionally, our method can model surrogate PDEs and predict solutions at arbitrary resolution, regardless of the input resolution. We evaluated our model on real-world Copernicus ocean current data and synthetic Navier-Stokes simulation datasets.
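As a rough illustration of how a surrogate can return solutions at arbitrary resolution, here is a minimal coordinate-conditioned decoder sketch in PyTorch: a coarse input field is encoded once and then queried at any set of output coordinates. This is a generic pattern with hypothetical layer sizes, not the authors' actual architecture.

```python
# Generic sketch of an arbitrary-resolution surrogate: encode a coarse
# field once, then decode at any query coordinates. Hypothetical sizes;
# not the paper's architecture.
import torch
import torch.nn as nn

class CoordinateDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(  # coarse (u, v) field -> global latent
            nn.Conv2d(2, 32, 3, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(  # (latent, x, y) -> velocity (u, v)
            nn.Linear(latent_dim + 2, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, coarse_field, query_xy):
        # coarse_field: (B, 2, H, W) low-resolution currents
        # query_xy:     (B, N, 2) normalized coordinates in [0, 1]^2
        z = self.encoder(coarse_field)                      # (B, latent)
        z = z.unsqueeze(1).expand(-1, query_xy.size(1), -1)
        return self.decoder(torch.cat([z, query_xy], -1))   # (B, N, 2)

model = CoordinateDecoder()
coarse = torch.randn(1, 2, 16, 16)   # e.g. a coarse input patch
queries = torch.rand(1, 1000, 2)     # any output resolution
fine = model(coarse, queries)        # (1, 1000, 2) downscaled field
```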
Optimizers Qualitatively Alter Solutions And We Should Leverage This
Clare Lyle
Ionut-Vlad Modoranu
Naima Elosegui Borras
Dan Alistarh
Petar Veličković
Soham De
James Martens
Due to the nonlinear nature of Deep Neural Networks (DNNs), one cannot guarantee convergence to a unique global minimum of the loss when using optimizers relying only on local information, such as SGD. Indeed, this was a primary source of skepticism regarding the feasibility of DNNs in the early days of the field. The past decades of progress in deep learning have revealed this skepticism to be misplaced, and a large body of empirical evidence shows that sufficiently large DNNs following standard training protocols exhibit well-behaved optimization dynamics that converge to performant solutions. This success has biased the community to use convex optimization as a mental model for learning, leading to a focus on training efficiency, whether in terms of required iterations, FLOPs, or wall-clock time, when improving optimizers. We argue that, while this perspective has proven extremely fruitful, another perspective specific to DNNs has received considerably less attention: the optimizer not only influences the rate of convergence, but also the qualitative properties of the learned solutions. Restated, the optimizer can and will encode inductive biases and change the effective expressivity of a given class of models. Furthermore, we believe the optimizer can be an effective way of encoding desiderata in the learning process. We contend that the community should aim at understanding the biases of already existing methods, as well as aim to build new optimizers with the explicit intent of inducing certain properties of the solution, rather than solely judging them based on their convergence rates. We hope our arguments will inspire research to improve our understanding of how the learning process can impact the type of solution we converge to, and lead to a greater recognition of optimizer design as a critical lever that complements the roles of architecture and data in shaping model outcomes.
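A classic concrete instance of this thesis (our illustration, not an example drawn from the paper): on an underdetermined least-squares problem, gradient descent initialized at zero converges to the minimum-L2-norm interpolator, an inductive bias supplied entirely by the optimizer, since infinitely many weight vectors fit the data exactly.

```python
# Illustration (ours, not from the paper): gradient descent on an
# underdetermined least-squares problem, started at zero, converges to
# the minimum-L2-norm interpolator -- an inductive bias contributed by
# the optimizer rather than the model class.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 20))   # 5 equations, 20 unknowns: underdetermined
b = rng.normal(size=5)

w = np.zeros(20)               # initialization matters for the implicit bias
lr = 0.01                      # small enough for stable convergence here
for _ in range(200_000):
    w -= lr * A.T @ (A @ w - b)           # plain gradient descent

w_min_norm = np.linalg.pinv(A) @ b        # closed-form min-norm solution
print(np.abs(A @ w - b).max())            # ~0: both interpolate the data
print(np.linalg.norm(w - w_min_norm))     # ~0: GD found the min-norm one
```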
Parsing Autism Heterogeneity: Transcriptomic Subgrouping of Imaging-Derived Phenotypes in Autism.
Johanna Leyhausen
Caroline Gurr
Lisa M. Berg
Hanna Seelemeyer
Bassem Hermila
Tim Schäfer
Andreas Chiocchetti
Charlotte M. Pretzsch
Eva Loth
Beth Oakley
Jan K. Buitelaar
Christian Beckmann
Tony Charman
Thomas Bourgeron
Eli Barthome
Tobias Banaschewski
Jumana Ahmad
Sara Ambrosino
Bonnie Auyeung
Simon Baron-Cohen
Sarah Baumeister
Sven Bölte
Carsten Bours
Michael Brammer
Daniel Brandeis
Claudia Brogna
Yvette de Bruijn
Bhismadev Chakrabarti
Ineke Cornelissen
Daisy Crawley
Flavio Dell’Acqua
Sarah Durston
Christine Ecker
Jessica Faulkner
Vincent Frouin
Pilar Garcés
David Goyard
Lindsay Ham
Hannah Hayward
Joerg F. Hipp
Rosemary Holt
Mark Johnson
Emily J. H. Jones
Prantik Kundu
Meng-Chuan Lai
Xavier Liogier D’ardhuy
Michael V. Lombardo
David J. Lythgoe
René Mandl
Andre Marquand
Luke Mason
Maarten Mennes
Andreas Meyer-Lindenberg
Carolin Moessnang
Nico Bast
Larry O’Dwyer
Marianne Oldehinkel
Bob Oranje
Gahan Pandina
Antonio Persico
Barbara Ruggeri
Amber N. V. Ruigrok
Jessica Sabet
Roberto Sacco
Antonia San José Cáceres
Emily Simonoff
Will Spooren
Julian Tillmann
Roberto Toro
Heike Tost
Jack Waldman
Steve C. R. Williams
Caroline Wooldridge
Marcel P. Zwiers
Declan Murphy
Perpetua: Multi-Hypothesis Persistence Modeling for Semi-Static Environments
Miguel Saavedra-Ruiz
Samer B. Nashed
Many robotic systems require extended deployments in complex, dynamic environments. In such deployments, parts of the environment may change between subsequent robot observations. Most robotic mapping or environment modeling algorithms are incapable of representing dynamic features in a way that enables predicting their future state. Instead, they opt to filter certain state observations, either by removing them or through some form of weighted averaging. This paper introduces Perpetua, a method for modeling the dynamics of semi-static features. Perpetua is able to incorporate prior knowledge about the dynamics of a feature if it exists, track multiple hypotheses, and adapt over time to enable prediction of future feature states. Specifically, we chain together mixtures of "persistence" and "emergence" filters to model the probability that features will disappear or reappear in a formal Bayesian framework. The approach is an efficient, scalable, general, and robust method for estimating the states of features in an environment, both in the present and at arbitrary future times. Through experiments on simulated and real-world data, we find that Perpetua yields better accuracy than similar approaches while also being online-adaptable and robust to missing observations.
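To give a flavor of persistence/emergence modeling, here is a generic two-state Bayes filter under assumed per-step disappearance and reappearance rates and detector hit/false-alarm rates; this is an illustrative sketch, not Perpetua's actual filter mixtures.

```python
# Generic two-state Bayes filter for a semi-static feature: the feature
# is either present or absent, with assumed per-step disappearance and
# reappearance rates. Illustrative sketch only; not Perpetua's actual
# persistence/emergence filter chain.

def update_presence_belief(p_present, detected,
                           p_disappear=0.01, p_emerge=0.005,
                           p_detect=0.9, p_false_alarm=0.05):
    # Predict: the feature may have disappeared or (re)appeared.
    prior = p_present * (1 - p_disappear) + (1 - p_present) * p_emerge
    # Correct: fold in the detector's hit and false-alarm rates.
    if detected:
        like_present, like_absent = p_detect, p_false_alarm
    else:
        like_present, like_absent = 1 - p_detect, 1 - p_false_alarm
    posterior = like_present * prior
    posterior /= posterior + like_absent * (1 - prior)
    return posterior

# Usage: a feature is seen once, then missed on three later revisits.
belief = 0.5
for obs in [True, False, False, False]:
    belief = update_presence_belief(belief, obs)
    print(f"P(present) = {belief:.3f}")
```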