Publications
Accelerated green material and solvent discovery with chemistry- and physics-guided generative AI
To test whether the mean curvature of isophotes (MCI), a geometric image transformation, can be used to improve automatic detection on chest CT of Usual Interstitial Pneumonia (UIP), a determining radiological pattern in the diagnosis of Interstitial Lung Diseases (ILD).
This retrospective study included chest CT scans from 234 patients (123 female, 111 male; mean age: 61.6 years; age range: 18-90 years) obtained at two independent institutions between 2007 and 2024.
Three different classification models were trained on the original CT images and separately on MCI-transformed CT images: (1) a previously published deep learning model for classifying fibrotic lung disease on chest CT, (2) a classification pipeline based on the EfficientNet-V2 convolutional neural network architecture, and (3) a non-deep-learning model based on the functional principal component analysis (FPCA) of density functions of voxel intensity.
All models were trained on data from the first institution and evaluated on data from the second institution with the recall-macro, precision-macro and F1-macro scores. Performance difference between classifier pairs was tested with the Stuart-Maxwell marginal homogeneity test.
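The Stuart-Maxwell marginal homogeneity test used above compares the marginal label distributions of two paired classifiers. A minimal sketch of the standard test statistic (this is a generic illustration, not the study's code; the example tables are invented) could look like:

```python
import numpy as np
from scipy.stats import chi2

def stuart_maxwell(table):
    """Stuart-Maxwell marginal homogeneity test for a square k x k table
    of paired categorical predictions. Returns (statistic, p_value)."""
    n = np.asarray(table, dtype=float)
    k = n.shape[0]
    d = n.sum(axis=1) - n.sum(axis=0)  # row margins minus column margins
    # Covariance of d under the null: V_ii = n_i. + n_.i - 2*n_ii,
    # V_ij = -(n_ij + n_ji); drop one category to make V invertible.
    v = -(n + n.T)
    np.fill_diagonal(v, n.sum(axis=1) + n.sum(axis=0) - 2 * np.diag(n))
    d, v = d[:-1], v[:-1, :-1]
    stat = float(d @ np.linalg.solve(v, d))
    return stat, float(chi2.sf(stat, df=k - 1))

# A perfectly symmetric table has identical margins: statistic 0, p = 1.
stat_sym, p_sym = stuart_maxwell([[10, 2, 3], [2, 8, 4], [3, 4, 12]])
stat_asym, p_asym = stuart_maxwell([[20, 10, 2], [3, 15, 4], [1, 2, 9]])
```

With k = 3 classes, as in this study's 3-group task, the statistic is compared against a chi-square distribution with 2 degrees of freedom.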
For a fixed model architecture and training algorithm, MCI-transformed images yield comparable or better classification performance than the original CT images. The best performance improvement achieved with MCI compared to CT was: recall-macro 0.83 vs 0.57, precision-macro 0.81 vs 0.50, F1-macro 0.80 vs 0.49, p=4.2e-5.
MCI may be a valuable addition to existing AI systems for screening for UIP on chest CT.
Machine learning methods for identifying usual interstitial pneumonia on chest CT perform better when the input CT images are transformed via the mean curvature of isophotes (MCI), a geometric transformation method known from classical computer vision.
Three machine learning models were trained on a dataset of 158 patients from one institution and tested on another dataset of 76 patients from an independent institution to discriminate for usual interstitial pneumonia (UIP) on chest CT in a 3-group classification task.
When keeping the network architecture and parameters fixed, changing the input image domain from the original CT to MCI-transformed images improved classification performance (Stuart-Maxwell test, p < 5e-3).
MCI may be a valuable addition to existing machine learning systems for screening for UIP on chest CT, whether based on deep learning or on simpler shallow classifiers.
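The mean curvature of isophotes has a standard closed form in classical computer vision. A minimal 2-D sketch follows (up to sign convention; the paper's 3-D CT formulation may differ):

```python
import numpy as np

def isophote_curvature(img, eps=1e-8):
    """Mean curvature of isophotes of a 2-D image:
    kappa = (Ix^2*Iyy - 2*Ix*Iy*Ixy + Iy^2*Ixx) / (Ix^2 + Iy^2)^(3/2)."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)       # derivatives along axis 0 (y), axis 1 (x)
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    num = Ix**2 * Iyy - 2 * Ix * Iy * Ixy + Iy**2 * Ixx
    den = (Ix**2 + Iy**2) ** 1.5 + eps  # eps guards flat (zero-gradient) regions
    return num / den

# Sanity check: for img = x^2 + y^2 the isophotes are circles of radius r,
# whose curvature is 1/r.
ys, xs = np.mgrid[-10.0:11.0, -10.0:11.0]
kappa = isophote_curvature(xs**2 + ys**2)
```

At the grid point (y=0, x=5), i.e. radius 5, the computed curvature should be 1/5.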
Decision-making problems often feature uncertainty stemming from heterogeneous and context-dependent human preferences. To address this, we propose a sequential learning-and-optimization pipeline to learn preference distributions and leverage them to solve downstream problems, for example risk-averse formulations. We focus on human choice settings that can be formulated as (integer) linear programs. In such settings, existing inverse optimization and choice modelling methods infer preferences from observed choices but typically produce point estimates or fail to capture contextual shifts, making them unsuitable for risk-averse decision-making. Using a bounded-variance score function gradient estimator, we train a predictive model mapping contextual features to a rich class of parameterizable distributions. This approach yields a maximum likelihood estimate. The model generates scenarios for unseen contexts in the subsequent optimization phase. In a synthetic ridesharing environment, our approach reduces average post-decision surprise by up to 114
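The score-function gradient estimator mentioned above is the classic REINFORCE identity, grad E[f(x)] = E[f(x) * grad log p(x)]. A generic sketch with a mean baseline for variance reduction (an illustration only, not the paper's bounded-variance construction; all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(theta, f, n_samples=400_000, sigma=1.0):
    """Estimate d/dtheta E_{x ~ N(theta, sigma^2)}[f(x)] via the
    score-function (REINFORCE) estimator, with a mean baseline to
    reduce the variance of the Monte Carlo average."""
    x = rng.normal(theta, sigma, n_samples)
    fx = f(x)
    score = (x - theta) / sigma**2  # d/dtheta of log N(x; theta, sigma^2)
    return float(np.mean((fx - fx.mean()) * score))

# For f(x) = x^2, E[f] = theta^2 + sigma^2, so the true gradient is 2*theta.
g = score_function_grad(1.5, lambda x: x**2)
```

The baseline (subtracting the sample mean of f) leaves the estimator's expectation unchanged because the score has zero mean, but shrinks its variance.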
The proliferation of agent benchmarks has created critical fragmentation that threatens research productivity. Each new benchmark requires substantial custom integration, creating an "integration tax" that limits comprehensive evaluation. We propose CUBE (Common Unified Benchmark Environments), a universal protocol standard built on MCP and Gym that allows benchmarks to be wrapped once and used everywhere. By separating task, benchmark, package, and registry concerns into distinct API layers, CUBE enables any compliant platform to access any compliant benchmark for evaluation, RL training, or data generation without custom integration. We call on the community to contribute to the development of this standard before platform-specific implementations deepen fragmentation as benchmark production accelerates through 2026.
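The abstract does not specify CUBE's API, but the "wrap once, use everywhere" idea can be illustrated with a hypothetical Gym-style reset/step wrapper around a benchmark's tasks (all class and field names here are invented for illustration):

```python
from typing import Any, Dict, Tuple

class BenchmarkEnv:
    """Hypothetical sketch: a benchmark exposing (prompt, answer) tasks
    is wrapped behind a uniform Gym-style reset/step interface, so any
    platform speaking this protocol can run it without custom glue."""

    def __init__(self, tasks):
        self.tasks = list(tasks)
        self._idx = -1

    def reset(self) -> Dict[str, Any]:
        # Advance to the next task and return its initial observation.
        self._idx = (self._idx + 1) % len(self.tasks)
        return {"prompt": self.tasks[self._idx]["prompt"]}

    def step(self, action: str) -> Tuple[Dict[str, Any], float, bool, Dict]:
        # Score the agent's action against the task's reference answer.
        reward = 1.0 if action == self.tasks[self._idx]["answer"] else 0.0
        return {}, reward, True, {}

env = BenchmarkEnv([{"prompt": "2+2?", "answer": "4"}])
obs = env.reset()
_, reward, done, _ = env.step("4")
```

The point of such a protocol is that the evaluator, RL trainer, or data-generation pipeline only ever sees reset/step, never the benchmark's internals.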
We propose TacFiLM, a lightweight modality-fusion approach that integrates visual-tactile signals into vision-language-action (VLA) models. While recent advances in VLA models have introduced robot policies that are both generalizable and semantically grounded, these models mainly rely on vision-based perception. Vision alone, however, cannot capture the complex interaction dynamics that occur during contact-rich manipulation, including contact forces, surface friction, compliance, and shear. While recent attempts to integrate tactile signals into VLA models often increase complexity through token concatenation or large-scale pretraining, the heavy computational demands of behavioural models necessitate more lightweight fusion strategies. To address these challenges, TacFiLM outlines a post-training finetuning approach that conditions intermediate visual features on pretrained tactile representations using feature-wise linear modulation (FiLM). Experimental results on insertion tasks demonstrate consistent improvements in success rate, direct insertion performance, completion time, and force stability across both in-distribution and out-of-distribution tasks. Together, these results support our method as an effective approach to integrating tactile signals into VLA models, improving contact-rich manipulation behaviours.
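FiLM itself is a standard conditioning mechanism: per-channel scale and shift parameters, predicted from one modality, modulate the feature map of another. A minimal NumPy sketch (the tactile encoder and head weights below are invented placeholders, not TacFiLM's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def film(visual, gamma, beta):
    """Feature-wise linear modulation: scale and shift each channel of a
    (C, H, W) visual feature map with per-channel (C,) parameters."""
    return gamma[:, None, None] * visual + beta[:, None, None]

C, H, W = 4, 8, 8
visual = rng.normal(size=(C, H, W))

# Hypothetical tactile embedding and linear heads producing gamma and beta.
tactile = rng.normal(size=16)
W_gamma = rng.normal(size=(C, 16))
W_beta = rng.normal(size=(C, 16))

# Parameterizing gamma as 1 + delta keeps the identity map easy to learn.
out = film(visual, 1.0 + W_gamma @ tactile, W_beta @ tactile)
```

With gamma fixed to ones and beta to zeros, FiLM reduces to the identity, which is why it can be bolted onto a pretrained backbone as a lightweight post-training step.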
Species distribution models (SDMs), which aim to predict species occurrence based on environmental variables, are widely used to monitor and respond to biodiversity change. Recent deep learning advances for SDMs have been shown to perform well on complex and heterogeneous datasets, but their effectiveness remains limited by spatial biases in the data. In this paper, we revisit deep SDMs from a Bayesian perspective and introduce BATIS, a novel and practical framework wherein prior predictions are updated iteratively using limited observational data. Models must appropriately capture both aleatoric and epistemic uncertainty to effectively combine fine-grained local insights with broader ecological patterns. We benchmark an extensive set of uncertainty quantification approaches on a novel dataset including citizen science observations from the eBird platform. Our empirical study shows how Bayesian deep learning approaches can greatly improve the reliability of SDMs in data-scarce locations, which can contribute to ecological understanding and conservation efforts.
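The abstract does not give BATIS's update rule, but the general idea of refining a prior occurrence prediction with limited local observations can be sketched with a generic Beta-Binomial update (an illustration under that assumption, not the paper's method):

```python
def update_occurrence(prior_mean, prior_strength, detections, visits):
    """Beta-Binomial sketch: treat a model's prior occurrence probability
    as a Beta(a, b) prior (mean = prior_mean, pseudo-count = prior_strength)
    and update it with local detection / non-detection counts."""
    a = prior_mean * prior_strength + detections
    b = (1.0 - prior_mean) * prior_strength + (visits - detections)
    return a / (a + b)  # posterior mean occurrence probability

# A prior of 0.2 with strength 10, confronted with 8 detections in
# 10 visits, is pulled upward toward the observed rate.
p = update_occurrence(0.2, prior_strength=10, detections=8, visits=10)
```

The prior_strength pseudo-count controls how many local observations are needed to move the model away from its broad-scale prediction, which is the trade-off a framework like BATIS must navigate in data-scarce locations.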
2026-03-13
AAAI Conference on Artificial Intelligence (published)
Unsupervised proteomic analysis identified biologically coherent endotypes that advance understanding of acute lung injury in COVID‑19 and support improved diagnostic and prognostic strategies.
We investigate the capacity of Large Language Models (LLMs) for imaginative reasoning—the proactive construction, testing, and revision of hypotheses in information-sparse environments. Existing benchmarks, often static or focused on social deduction, fail to capture the dynamic, exploratory nature of this reasoning process. To address this gap, we introduce a comprehensive research framework based on the classic "Turtle Soup" game, integrating a benchmark, an agent, and an evaluation protocol. We present TurtleSoup-Bench, the first large-scale, bilingual, interactive benchmark for imaginative reasoning, comprising 800 turtle soup stories sourced from both the Internet and expert authors. We also propose Mosaic-Agent, a novel agent designed to assess LLMs' performance in this setting. To evaluate reasoning quality, we develop a multi-dimensional protocol measuring logical consistency, detail completion, and conclusion alignment. Experiments with leading LLMs reveal clear capability limits, common failure patterns, and a significant performance gap compared to humans. Our work offers new insights into LLMs' imaginative reasoning and establishes a foundation for future research on exploratory agent behavior.
2026-03-13
AAAI Conference on Artificial Intelligence (published)
Animal brains flexibly and efficiently achieve many behavioral tasks with a single neural network. A core goal in modern neuroscience is to map the mechanisms of the brain's flexibility onto the dynamics underlying neural populations. However, identifying task-specific dynamical rules from limited, noisy, and high-dimensional experimental neural recordings remains a major challenge, as experimental data often provide only partial access to brain states and dynamical mechanisms. While recurrent neural networks (RNNs) directly constrained by neural data have been effective in inferring underlying dynamical mechanisms, they are typically limited to single-task domains and struggle to generalize across behavioral conditions. Here, we introduce JEDI, a hierarchical model that captures neural dynamics across tasks and contexts by learning a shared embedding space over RNN weights. This model recapitulates individual samples of neural dynamics while scaling to arbitrarily large and complex datasets, uncovering shared structure across conditions in a single, unified model. Using simulated RNN datasets, we demonstrate that JEDI accurately learns robust, generalizable, condition-specific embeddings. By reverse-engineering the weights learned by JEDI, we show that it recovers ground truth fixed point structures and unveils key features of the underlying neural dynamics in the eigenspectra. Finally, we apply JEDI to motor cortex recordings during monkey reaching to extract mechanistic insight into the neural dynamics of motor control. Our work shows that joint learning of contextual embeddings and recurrent weights provides scalable and generalizable inference of brain dynamics from recordings alone.
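The core idea of a shared embedding space over RNN weights can be sketched as a tiny hypernetwork: a low-dimensional, per-condition embedding linearly combines shared basis weight matrices into condition-specific recurrent weights. This is an invented minimal illustration of that pattern, not JEDI's actual architecture; all dimensions and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D = 16, 3  # N recurrent units; D-dimensional context embedding
basis = rng.normal(size=(D, N, N)) / np.sqrt(N)  # shared weight basis

def rnn_weights(embedding):
    """Condition-specific recurrent weights: a linear combination of the
    shared basis, indexed by a learned per-condition embedding."""
    return np.tensordot(embedding, basis, axes=1)  # -> (N, N)

def run_rnn(W, inp, steps=20):
    """Iterate a vanilla rate RNN, h <- tanh(W h + inp), to a late state."""
    h = np.zeros(N)
    for _ in range(steps):
        h = np.tanh(W @ h + inp)
    return h

W_task = rnn_weights(np.array([1.0, 0.0, 0.5]))  # one condition's weights
h = run_rnn(W_task, rng.normal(size=N))
```

Training the basis jointly across conditions while fitting one small embedding per condition is what lets such a model share dynamical structure yet still specialize per task.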