Publications

FairFLRep: Fairness-aware fault localization and repair of Deep Neural Networks
Moses Openja
Paolo Arcaini
Fuyuki Ishikawa
EPISeg: Automated segmentation of the spinal cord on echo planar images using open-access multi-center data
Merve Kaptan
Alexandra Tinnermann
Ali Khatibi
Alice Dabbagh
Christian Büchel
Christian W. Kündig
Christine S. W. Law
Dario Pfyffer
David J. Lythgoe
Dimitra Tsivaka
Dimitri Van De Ville
Falk Eippert
Fauziyya Muhammad
Gary H. Glover
Gergely David
Grace Haynes
Jan Haaker
Jonathan C. W. Brooks
Jürgen Finsterbusch
Katherine T. Martucci
Kimberly J. Hemmerling
Mahdi Mobarak-Abadi
Mark A. Hoggarth
Matthew A. Howard
Molly G. Bright
Nawal Kinany
Olivia S. Kowalczyk
Patrick Freund
Robert L. Barry
Sean Mackey
Shahabeddin Vahdat
Simon Schading
Stephen B. McMahon
Todd Parish
Véronique Marchand-Pauvert
Yufen Chen
Zachary A. Smith
Kenneth A. Weber
Benjamin De Leener
Functional magnetic resonance imaging (fMRI) of the spinal cord is relevant for studying sensation, movement, and autonomic function. Preprocessing of spinal cord fMRI data involves segmentation of the spinal cord on gradient-echo echo planar imaging (EPI) images. Current automated segmentation methods do not work well on these data due to the low spatial resolution, susceptibility artifacts causing distortions and signal drop-out, ghosting, and motion-related artifacts. Consequently, this segmentation task demands considerable manual effort, which takes time and is prone to user bias. In this work, we (i) gathered a multi-center dataset of spinal cord gradient-echo EPI with ground-truth segmentations and shared it on OpenNeuro (https://openneuro.org/datasets/ds005143/versions/1.3.1), and (ii) developed a deep learning-based model, EPISeg, for the automatic segmentation of the spinal cord on gradient-echo EPI data. We observe a significant improvement in segmentation quality compared with other available spinal cord segmentation models. Our model is resilient to different acquisition protocols as well as commonly observed artifacts in fMRI data. The training code is available at https://github.com/sct-pipeline/fmri-segmentation/, and the model has been integrated into the Spinal Cord Toolbox as a command-line tool.
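Segmentation quality in this setting is typically scored against ground-truth masks with an overlap metric such as the Dice coefficient; the abstract does not name the metric, so the following is an illustrative sketch only:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D "axial slice": a predicted cord mask vs. a ground-truth mask.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1   # 16 ground-truth voxels
pred = np.zeros((8, 8), dtype=int)
pred[3:6, 2:6] = 1    # 12 predicted voxels, all inside the truth
print(dice_coefficient(pred, truth))  # 2*12/(12+16) ≈ 0.857
```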
RL Fine-Tuning Heals OOD Forgetting in SFT
Hangzhan Jin
Sicheng Lyu
Mohammad Hamdaqa
The two-stage fine-tuning paradigm of Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has empirically shown better reasoning performance than one-stage SFT for the post-training of Large Language Models (LLMs). However, the evolution and mechanism behind the synergy of SFT and RL are still under-explored and inconclusive. In our study, we find the well-known claim "SFT memorizes, RL generalizes" is over-simplified, and discover that: (1) OOD performance peaks at the early stage of SFT and then declines (OOD forgetting), and the best SFT checkpoint cannot be identified from training/test loss; (2) the subsequent RL stage does not generate fundamentally better OOD capability; instead it plays an OOD restoration role, recovering the reasoning ability lost during SFT; (3) the recovery ability has boundaries, i.e., if SFT trains for too short or too long, RL cannot recover the lost OOD ability; (4) to uncover the underlying mechanisms behind the forgetting and restoration process, we employ SVD analysis on parameter matrices, manually edit them, and observe their impact on model performance. Unlike the common belief that shifts in model capacity mainly result from changes in singular values, we find that the singular values are actually quite stable throughout fine-tuning. Instead, the OOD behavior strongly correlates with the rotation of singular vectors. Our findings re-identify the roles of SFT and RL in two-stage fine-tuning and identify the rotation of singular vectors as the key mechanism. Code is available at https://github.com/xiaodanguoguo/RL_Heals_SFT
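The phenomenon the abstract describes (stable singular values, rotating singular vectors) can be illustrated with a small SVD sketch; the construction below is purely illustrative and not taken from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "pre-fine-tuning" weight matrix and its SVD.
W0 = rng.standard_normal((6, 4))
U0, S0, Vt0 = np.linalg.svd(W0, full_matrices=False)

# Simulate fine-tuning that rotates the right singular vectors
# while leaving the singular values untouched.
A = rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                 # random orthogonal rotation
W1 = U0 @ np.diag(S0) @ (Vt0 @ Q)

U1, S1, Vt1 = np.linalg.svd(W1, full_matrices=False)

# (a) Singular values are unchanged by the rotation...
print(np.allclose(np.sort(S0), np.sort(S1)))  # True

# (b) ...but the singular-vector subspaces have rotated.
# Cosines of principal angles between the top-2 right singular subspaces:
cosines = np.linalg.svd(Vt0[:2] @ Vt1[:2].T, compute_uv=False)
print(cosines)  # values below 1.0 indicate rotation
```

Editing `Q` back toward the identity is the kind of manual intervention the abstract alludes to: it undoes the rotation without touching the spectrum.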
An AI system to help scientists write expert-level empirical software
Eser Aygün
Gheorghe Comanici
Marc Coram
Hao Cui
Jake Garrison
Renee Johnston
Anton Kast
Cory Y. McLean
Peter C. Norgaard
Zahra Shamsi
David Smalling
James Thompson
Subhashini Venugopalan
Brian P Williams
Chujun He
Sarah Martinson
Martyna Plomecka
Lai Wei
Yuchen Zhou
Qian-Ze Zhu
Matthew Abraham
Erica Brand
Anna Bulanova
Chris Co
Scott Ellsworth
Grace Joseph
Malcolm Kane
Ryan K. Krueger
Johan Kartiwa
Daniel J. Liebling
Jan-Matthis Lueckmann
Paul Raccuglia
Xuefei Wang
Katherine Chou
James Manyika
Yossi Matias
J.C. Platt
Lizzie Dorfman
Shibl Mourad
Michael P. Brenner
The cycle of scientific discovery is frequently bottlenecked by the slow, manual creation of software to support computational experiments. To address this, we present an AI system that creates expert-level scientific software whose goal is to maximize a quality metric. The system uses a Large Language Model (LLM) and Tree Search (TS) to systematically improve the quality metric and intelligently navigate the large space of possible solutions. The system achieves expert-level results when it explores and integrates complex research ideas from external sources. The effectiveness of tree search is demonstrated across a wide range of benchmarks. In bioinformatics, it discovered 40 novel methods for single-cell data analysis that outperformed the top human-developed methods on a public leaderboard. In epidemiology, it generated 14 models that outperformed the CDC ensemble and all other individual models for forecasting COVID-19 hospitalizations. Our method also produced state-of-the-art software for geospatial analysis, neural activity prediction in zebrafish, time series forecasting, and numerical solution of integrals. By devising and implementing novel solutions to diverse tasks, the system represents a significant step towards accelerating scientific progress.
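The core loop the abstract describes (an LLM proposing candidate programs, tree search keeping and expanding the highest-scoring ones) can be sketched generically; `propose_variants` stands in for an LLM call here and the whole skeleton is illustrative, not the paper's implementation:

```python
import heapq

def tree_search(initial_solution, score, propose_variants,
                rounds=3, beam=2):
    """Greedy beam-style search over candidate solutions.

    score: maps a solution to the quality metric being maximized.
    propose_variants: stand-in for an LLM that rewrites a solution
    into several candidate improvements.
    """
    frontier = [(-score(initial_solution), initial_solution)]
    best = initial_solution
    for _ in range(rounds):
        new_frontier = []
        for _neg, sol in frontier:
            for child in propose_variants(sol):
                heapq.heappush(new_frontier, (-score(child), child))
                if score(child) > score(best):
                    best = child
        frontier = heapq.nsmallest(beam, new_frontier)  # keep top-`beam`
    return best

# Toy example: "solutions" are integers, quality is closeness to 42,
# and the "LLM" proposes small edits.
score = lambda x: -abs(42 - x)
variants = lambda x: [x + 1, x - 1, x + 5]
print(tree_search(0, score, variants))  # climbs toward 42
```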
Imagining Alternatives: Towards High-Resolution 3D Counterfactual Medical Image Generation via Language Guidance
Vision-language models have demonstrated impressive capabilities in generating 2D images under various conditions; however, the success of these models is largely enabled by extensive, readily available pretrained foundation models. Critically, comparable pretrained models do not exist for 3D, significantly limiting progress. As a result, the potential of vision-language models to produce high-resolution 3D counterfactual medical images conditioned solely on natural language remains unexplored. Addressing this gap would enable powerful clinical and research applications, such as personalized counterfactual explanations, simulation of disease progression, and enhanced medical training by visualizing hypothetical conditions in realistic detail. Our work takes a step toward this challenge by introducing a framework capable of generating high-resolution 3D counterfactual medical images of synthesized patients guided by free-form language prompts. We adapt state-of-the-art 3D diffusion models with enhancements from Simple Diffusion and incorporate augmented conditioning to improve text alignment and image quality. To our knowledge, this is the first demonstration of a language-guided native-3D diffusion model applied to neurological imaging, where faithful three-dimensional modeling is essential. On two neurological MRI datasets, our framework simulates varying counterfactual lesion loads in Multiple Sclerosis and cognitive states in Alzheimer's disease, generating high-quality images while preserving subject fidelity. Our results lay the groundwork for prompt-driven disease progression analysis in 3D medical imaging. Project link: https://lesupermomo.github.io/imagining-alternatives/.
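Text alignment in language-conditioned diffusion models is commonly strengthened at sampling time with classifier-free guidance; the abstract does not spell out the paper's conditioning scheme, so the following is only a minimal sketch of that standard technique:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: push the denoiser's noise prediction
    along the text-conditioned direction."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy noise predictions for a few voxels of one denoising step.
eps_u = np.array([0.1, -0.2, 0.3])   # unconditional prediction
eps_c = np.array([0.2,  0.0, 0.1])   # text-conditioned prediction
print(cfg_combine(eps_u, eps_c, 1.0))  # scale 1 recovers eps_c
print(cfg_combine(eps_u, eps_c, 3.0))  # stronger pull toward the text
```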
Task Robustness via Re-Labelling Vision-Action Robot Data