Publications

Street review: A participatory AI-based framework for assessing streetscape inclusivity
Shin Koseki
Urban centers undergo social, demographic, and cultural changes that shape public street use and require systematic evaluation of public spaces. This study presents Street Review, a mixed-methods approach that combines participatory research with AI-based analysis to assess streetscape inclusivity. In Montréal, Canada, 28 residents participated in semi-directed interviews and image evaluations, supported by the analysis of approximately 45,000 street-view images from Mapillary. The approach produced visual analytics, such as heatmaps, to correlate subjective user ratings with physical attributes such as sidewalk maintenance, greenery, and seating. Findings reveal variations in perceptions of inclusivity and accessibility across demographic groups, demonstrating that incorporating diverse user feedback can enhance machine learning models through careful data-labeling and co-production strategies. The Street Review framework offers a systematic method for urban planners and policy analysts to inform planning, policy development, and management of public streets.
Loss Smoothing for Continual Adaptation
Neural networks are often adapted in settings with nonstationary data distributions, where the objective is to optimize performance on the current task and preserving accuracy on previous tasks is not required. As a result, existing methods primarily focus on improving plasticity, while stability is largely studied in the context of continual learning. In this work, we examine whether preserving stability can also be beneficial in model adaptation settings where past-task performance is irrelevant. We propose a simple loss smoothing approach that encourages selective adaptation by preserving task-shared features while modifying task-inconsistent ones. We evaluate our method on continual supervised model adaptation benchmarks and reinforcement learning benchmarks, and show that promoting representational stability during adaptation can improve performance across settings.
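One plausible instantiation of such an objective adds a feature-drift penalty to the current-task loss, so shared features are preserved while task-specific ones remain free to adapt. The sketch below is an illustrative reading of this idea, not the paper's actual objective; the linear feature extractor, the penalty weight `lam`, and the EMA reference rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear feature extractor: features = W @ x. The combined loss below
# (task loss + feature-drift penalty) is an illustrative sketch only.
W = rng.normal(size=(4, 8))
W_ref = W.copy()          # slowly updated ("smoothed") reference parameters
lam, ema = 0.5, 0.99      # penalty weight and reference smoothing rate

def smoothed_loss(W, W_ref, x, y, lam):
    feats = W @ x
    task_loss = np.mean((feats.sum(axis=0) - y) ** 2)   # current-task error
    drift = np.mean((feats - W_ref @ x) ** 2)           # feature-drift penalty
    return task_loss + lam * drift

x = rng.normal(size=(8, 16))
y = rng.normal(size=16)
loss = smoothed_loss(W, W_ref, x, y, lam)
# After each optimizer step, the reference would track the live weights slowly:
W_ref = ema * W_ref + (1 - ema) * W
```

Because the reference lags the live weights, features that drift quickly under the new task are penalized, while directions the task does not pull on stay put.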
Using virtual reality hypnosis during stem cell transplant for patients in hematology: A protocol for a feasibility randomized study
Audrey Laurin
Floriane Rousseaux
Isaiah Gitonga
Jean Roy
Mathieu Landry
Richard LeBlanc
Nadia Godin
Caroline Arbour
Philippe Richebé
Pierre Rainville
David Ogez
Valentyn Fournier
ClinicalTrials.gov NCT06817759.
Soil microbiome prediction using traditional machine learning and deep learning models
Zahia Aouabed
Vincent Therrien
Mohamed Achraf Bouaoune
Mohammadreza Bakhtyari
Mohamed Hijri
The accuracy of macrobiological community predictions largely depends on the taxonomic scale considered. Nowadays, the applicability of such predictions remains an important challenge when extended to microbial soil communities. This is not only due to the lack of reliable benchmark data, but also to a greater diversity of the soil microorganisms compared to other environments. In this study, we use six traditional machine learning regression models and one deep learning regressor to predict relative frequencies of bacterial and fungal communities within the soil microbiome based on environmental factors. We analyze the data from two publicly available soil microbiome datasets: (1) data collected by Averill and co-authors and analyzed in a recent Nature Ecology and Evolution article, and (2) data extracted from the NEON database, to estimate the composition of bacterial and fungal communities at the functional (i.e. functional group level) and taxonomic scales (i.e. phylum, class, order, family, and genus levels). Our findings suggest the presence of a general pattern across the observed taxonomic scales according to which the predictability of the soil microbiome increases with taxonomic scale. However, a notable exception occurs when machine learning models are applied to predict bacterial communities at the functional group level for Averill et al.'s data, where all of them fail to provide accurate prediction results. The best overall results obtained include the value of the coefficient of determination
Detoxifying LLMs via Representation Erasure-Based Preference Optimization
Large language models (LLMs) trained on web-scale data can produce toxic outputs, raising concerns for safe deployment. Prior defenses, based on applications of DPO, NPO, and similar algorithms, reduce the likelihood of harmful continuations, but not robustly so: they are vulnerable to adversarial prompting and easily undone by fine-tuning-based relearning attacks. Indeed, research has shown that these edits to the model are superficial: linear probing reveals that harmful "directions" remain present in representations. To address this, we propose Representation Erasure-based Preference Optimization (REPO), reformulating detoxification as a token-level preference problem. Using a novel objective with preference data, we force the representations of toxic continuations to converge toward their benign counterparts. Our mechanistic analysis reveals that this granular approach is critical: unlike baselines, REPO induces deep, localized edits to toxicity-encoding neurons while preserving general model utility. Exhaustive evaluations show that REPO achieves state-of-the-art robustness, stopping sophisticated threats, including relearning attacks and enhanced GCG jailbreaks, where existing representation- and output-based methods fail.
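A REPO-style objective might combine a token-level preference term (prefer benign over toxic tokens, DPO-like) with a representation-erasure term pulling toxic hidden states toward their benign counterparts. The sketch below is an illustrative assumption about the shape of such a loss, not the paper's code; the hidden states, scores, and weight `alpha` are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hidden states for paired continuations of the same prompt
# (toxic vs. benign), one vector per token.
h_toxic = rng.normal(size=(5, 16))    # (tokens, hidden_dim)
h_benign = rng.normal(size=(5, 16))
s_toxic = rng.normal(size=5)          # per-token model scores
s_benign = rng.normal(size=5)

def repo_style_loss(h_tox, h_ben, s_tox, s_ben, alpha=1.0):
    # Token-level preference: maximize log-sigmoid of (benign - toxic) margin.
    pref = -np.mean(np.log(1.0 / (1.0 + np.exp(-(s_ben - s_tox)))))
    # Representation erasure: pull toxic hidden states toward the benign ones
    # so the "toxic direction" is removed rather than merely down-weighted.
    erase = np.mean((h_tox - h_ben) ** 2)
    return pref + alpha * erase

loss = repo_style_loss(h_toxic, h_benign, s_toxic, s_benign)
```

The erasure term is what distinguishes this from plain output-level preference optimization: it targets the internal representations that linear probes show survive ordinary DPO-style edits.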
Stable Deep Reinforcement Learning via Isotropic Gaussian Representations
Deep reinforcement learning systems often suffer from unstable training dynamics due to non-stationarity, where learning objectives and data distributions evolve over time. We show that under non-stationary targets, isotropic Gaussian embeddings are provably advantageous. In particular, they induce stable tracking of time-varying targets for linear readouts, achieve maximal entropy under a fixed variance budget, and encourage a balanced use of all representational dimensions, all of which enable agents to be more adaptive and stable. Building on this insight, we propose the use of Sketched Isotropic Gaussian Regularization for shaping representations toward an isotropic Gaussian distribution during training. We demonstrate empirically, over a variety of domains, that this simple and computationally inexpensive method improves performance under non-stationarity while reducing representation collapse, neuron dormancy, and training instability.
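One illustrative reading of "sketched isotropic Gaussian regularization" is to project representations through a random sketching matrix and penalize the distance of the sketched covariance from a scaled identity. The estimator below is an assumption for illustration, not the authors' exact regularizer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Batch of learned representations (batch, dim).
feats = rng.normal(size=(256, 64))

def isotropy_penalty(feats, sketch_dim=16, sigma2=1.0, rng=rng):
    # Random sketch reduces the cost of the covariance comparison.
    S = rng.normal(size=(feats.shape[1], sketch_dim)) / np.sqrt(sketch_dim)
    z = (feats - feats.mean(axis=0)) @ S            # sketched, centered features
    cov = z.T @ z / z.shape[0]                      # empirical covariance
    # Frobenius distance to sigma^2 * I: zero iff the sketch is isotropic.
    return np.sum((cov - sigma2 * np.eye(sketch_dim)) ** 2)

penalty = isotropy_penalty(feats)
```

Adding such a penalty to the RL loss pushes the covariance of the embedding toward σ²I, which is what discourages the dimension collapse and dormant-neuron effects the abstract mentions.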
GASS: Geometry-Aware Spherical Sampling for Disentangled Diversity Enhancement in Text-to-Image Generation
Ye Zhu
Kaleb S. Newman
Johannes F. Lutzeyer
Adriana Romero
Olga Russakovsky
Joint Rolling Stock and Crew Scheduling with Multi-train Composition in Urban Rail Networks
Entai Wang
Lixing Yang
Jean-François Cordeau
Ziyou Gao
Yossiri Adulyasak
Rolling stock scheduling and crew scheduling are two fundamental problems that arise in the planning of urban rail operations and that are especially important in the case of flexible operations in real-world networks. These problems are often solved separately and sequentially in different planning stages, resulting in limited options to adjust crew schedules after rolling stock decisions have been made. To better coordinate these two decision-making processes and achieve better solutions, this paper studies a joint rolling stock and crew scheduling problem in urban rail networks. A novel optimization model is formulated with the aim of reducing the operational cost of rolling stock units and crew members. In addition, the multi-train composition mode is considered to adequately match different frequency requirements and rolling stock transport capacities. To solve the model, a customized branch-and-price-and-cut algorithm is proposed to find the optimal scheduling schemes, in which Benders decomposition is used to solve the linear programming relaxation of the path-based reformulation. Two customized column generation methods with label correcting are embedded to solve the master problem and pricing subproblems for generating paths (columns) corresponding to rolling stock units and crew groups, respectively. Finally, a branch-and-bound procedure with several acceleration techniques is proposed to find integer solutions. To demonstrate the computational performance and the robustness of the proposed approaches, a series of numerical experiments are performed on real-world instances of the Beijing urban rail network under different settings. The computational results confirm the high efficiency of the solution methodology and the benefits of the flexible operation schemes based on the solutions found by the proposed methods. Funding: This work was supported by National Natural Science Foundation of China [Grants 72288101, 72322022, 72371015].
The first author sincerely thanks the China Scholarship Council for supporting his visiting PhD program [Grant 202407090173]. Supplemental Material: The electronic companion is available at https://doi.org/10.1287/trsc.2024.0905 .
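The pricing subproblems in such column-generation schemes are typically shortest-path problems with negative reduced-cost arcs, solved by label correcting. The routine below is a minimal version of that building block; the tiny network and its arc costs are made up for illustration, and real pricing labels would also carry resource constraints (duty time, depot capacity, and so on).

```python
from collections import deque

def label_correcting(n, arcs, source):
    """Shortest-path labels from `source` on a digraph given as
    (tail, head, cost) triples; handles negative costs (no negative cycles)."""
    labels = [float("inf")] * n
    labels[source] = 0.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for (a, b, cost) in arcs:
            if a == u and labels[u] + cost < labels[b]:
                labels[b] = labels[u] + cost   # correct the label, re-scan b
                queue.append(b)
    return labels

# Hypothetical reduced-cost network with one negative arc.
arcs = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (2, 3, -0.5), (1, 3, 4.0)]
labels = label_correcting(4, arcs, 0)  # → [0.0, 2.0, 3.0, 2.5]
```

In the pricing step, a path whose label (reduced cost) is negative becomes a new column for the master problem.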
Delta-Crosscoder: Robust Crosscoder Model Diffing in Narrow Fine-Tuning Regimes
Model diffing methods aim to identify how fine-tuning changes a model's internal representations. Crosscoders approach this by learning shared dictionaries of interpretable latent directions between base and fine-tuned models. However, existing formulations struggle with narrow fine-tuning, where behavioral changes are localized and asymmetric. We introduce Delta-Crosscoder, which combines BatchTopK sparsity with a delta-based loss prioritizing directions that change between models, plus an implicit contrastive signal from paired activations on matched inputs. Evaluated across 10 model organisms, including synthetic false facts, emergent misalignment, subliminal learning, and taboo word guessing (Gemma, LLaMA, Qwen; 1B-9B parameters), Delta-Crosscoder reliably isolates latent directions causally responsible for fine-tuned behaviors and enables effective mitigation, outperforming SAE-based baselines while matching non-SAE-based ones. Our results demonstrate that crosscoders remain a powerful tool for model diffing.
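The core idea — encode the activation *delta* between paired base and fine-tuned activations under a batch-level top-k sparsity constraint — can be sketched as below. The tied linear encoder, the BatchTopK rule, and the delta reconstruction loss are an illustrative reading of the abstract, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Paired activations of base and fine-tuned models on matched inputs;
# narrow fine-tuning means the delta is small and localized.
h_base = rng.normal(size=(32, 24))
h_ft = h_base + 0.1 * rng.normal(size=(32, 24))
D = rng.normal(size=(24, 48))                    # decoder dictionary (dim, latents)

def delta_crosscoder_loss(h_base, h_ft, D, k=64):
    delta = h_ft - h_base
    z = delta @ D                                 # tied linear encoder, for brevity
    # BatchTopK: keep only the k largest-magnitude activations over the batch,
    # so dictionary capacity concentrates on directions that actually change.
    thresh = np.partition(np.abs(z).ravel(), -k)[-k]
    z_sparse = np.where(np.abs(z) >= thresh, z, 0.0)
    recon = z_sparse @ D.T
    return np.mean((delta - recon) ** 2)          # delta reconstruction error

loss = delta_crosscoder_loss(h_base, h_ft, D)
```

Training on the delta rather than on raw activations is what makes the learned latents specific to the fine-tune, rather than re-describing features both models share.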
Fluid-Agent Reinforcement Learning
Theodore J. Perkins
The primary focus of multi-agent reinforcement learning (MARL) has been to study interactions among a fixed number of agents embedded in an environment. However, in the real world, the number of agents is neither fixed nor known a priori. Moreover, an agent can decide to create other agents (for example, a cell may divide, or a company may spin off a division). In this paper, we propose a framework that allows agents to create other agents; we call this a fluid-agent environment. We present game-theoretic solution concepts for fluid-agent games and empirically evaluate the performance of several MARL algorithms within this framework. Our experiments include fluid variants of established benchmarks such as Predator-Prey and Level-Based Foraging, where agents can dynamically spawn, as well as a new environment we introduce that highlights how fluidity can unlock novel solution strategies beyond those observed in fixed-population settings. We demonstrate that this framework yields agent teams that adjust their size dynamically to match environmental demands.
A flaw in using pretrained protein language models in protein–protein interaction inference models
Navigating ternary doping in Li-ion cathodes with closed-loop multi-objective Bayesian optimization
Nooshin Zeinali Galabi
Cheng-Hao Liu
Marc Kamel
Shipeng Jia
Eric McCalla
To further improve secondary battery materials, we are increasingly exploring highly complex composition spaces in attempts to optimize multiple properties simultaneously. While our past work has done this in systematic manners using high-throughput experimentation, the exponential increase in the search space with triple doping makes grid search prohibitively expensive. Here, we demonstrate a closed-loop, multi-objective machine learning approach to guide the high-throughput workflow to efficiently navigate a space with approximately 14 million unique combinations. The test system is LiCoPO4, which we have previously explored using systematic codoping that was effective in optimizing one property only: energy density. To learn multiple electrochemical metrics, we first pretrain a set transformer on the public Materials Project database as a feature extractor, then attach a multi-task Gaussian process head and finetune the entire model on our high-throughput data. Through 3 rounds of active learning, we demonstrate that with a very small number of samples (as few as 125 random compositions and 63 model-predicted ones) we are able to simultaneously optimize four key electrochemical properties. Relative to the undoped system, the best composition raises our composite figure of merit by up to five times. This establishes an end-to-end workflow for accelerated battery materials design to be used in the rapidly growing field of autonomous materials discovery.
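The closed loop described above — fit surrogates to measured objectives, score candidate compositions, synthesize the most promising, repeat — can be sketched in miniature. Everything below is a stand-in: the two toy "electrochemical" objectives, the kernel-average surrogate (in place of the paper's pretrained set transformer with a multi-task GP head), and random scalarization as the multi-objective acquisition.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy objectives over ternary dopant fractions (x sums to 1); higher is better.
def objectives(x):
    return np.array([-np.sum((x - 0.2) ** 2), -np.sum((x - 0.5) ** 2)])

def kernel_predict(X, Y, Xq, length=0.3):
    # Nadaraya-Watson surrogate: RBF-weighted average of observed objectives.
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * length ** 2))
    return (w @ Y) / (w.sum(axis=1, keepdims=True) + 1e-12)

X = rng.dirichlet(np.ones(3), size=8)            # initial random compositions
Y = np.array([objectives(x) for x in X])
for _ in range(3):                               # three active-learning rounds
    cands = rng.dirichlet(np.ones(3), size=200)  # candidate compositions
    pred = kernel_predict(X, Y, cands)
    w = rng.dirichlet(np.ones(2))                # random scalarization weights
    best = cands[np.argmax(pred @ w)]            # next composition to "synthesize"
    X = np.vstack([X, best])
    Y = np.vstack([Y, objectives(best)])
```

Resampling the scalarization weights each round is one cheap way to spread the acquisitions across the Pareto front when several objectives must improve at once.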