Publications

A graphical user interface for calculating the arterial input function during dynamic positron emission tomography
Y. Daoud
Liam Carroll
Purpose. Dynamic positron emission tomography (dPET) requires the acquisition of the arterial input function (AIF), conventionally obtained via invasive arterial blood sampling. To obtain the AIF non-invasively, our group developed and combined two novel solutions consisting of (1) a detector, placed on a patient’s wrist during the PET scans to measure the radiation leaving the wrist and (2) a Geant4-based Monte Carlo simulation software. The simulations require patient-specific wrist geometry. The aim of this study was to develop a graphical user interface (GUI) allowing the user to import 2D ultrasound scans of a patient’s wrist and measure the wrist features needed to calculate the AIF. Methods. The GUI elements were implemented using Qt5 and VTK-8.2.0. The user imports a patient’s wrist ultrasound scans, measures the surface and depth of the radial artery and veins to model a wrist phantom, then specifies the radioactive source used during the dPET scan. The phantom, the source, and the number of decay events are imported into the Geant4-based Monte Carlo software to run a simulation. In this study, 100 million decays of 18F and 68Ga were simulated in a wrist phantom designed based on an ultrasound scan. The detector’s efficiency was calculated and the results were analyzed using a clinical data processing algorithm developed in a previous study. Results. The detector’s total efficiency decreased by 3.5% for 18F and by 51.7% for 68Ga when using a phantom based on ultrasound scans compared to a generic wrist phantom. Similarly, the data processing algorithm’s accuracy decreased when using the patient-specific phantom, giving errors greater than 1.0% for both radioisotopes. Conclusions. This toolkit enables the user to run Geant4-based Monte Carlo simulations for dPET detector development applications using a patient-specific wrist phantom, leading to a more precise simulation of the developed detector during dPET and the calculation of a personalized AIF.
The neuroconnectionist research programme
Adrien C. Doerig
R. Sommers
Katja Seeliger
J. Ismael
Grace W. Lindsay
Konrad Paul Kording
Talia Konkle
M. Gerven
Nikolaus Kriegeskorte
Tim Kietzmann
Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets
Dinghuai Zhang
Hanjun Dai
Nikolay Malkin
Ling Pan
Combinatorial optimization (CO) problems are often NP-hard and thus out of reach for exact algorithms, making them a tempting domain to apply machine learning methods. The highly structured constraints in these problems can hinder either optimization or sampling directly in the solution space. On the other hand, GFlowNets have recently emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially and have the potential to amortize such solution-searching processes in CO, as well as generate diverse solution candidates. In this paper, we design Markov decision processes (MDPs) for different combinatorial problems and propose to train conditional GFlowNets to sample from the solution space. Efficient training techniques are also developed to benefit long-range credit assignment. Through extensive experiments on a variety of different CO tasks with synthetic and realistic data, we demonstrate that GFlowNet policies can efficiently find high-quality solutions. Our implementation is open-sourced at https://github.com/zdhNarsil/GFlowNet-CombOpt.
Motor cortex latent dynamics encode arm movement direction and urgency independently
Andrea Colins Rodriguez
Lee Miller
Mark D. Humphries
Testing Feedforward Neural Networks Training Programs
Houssem Ben Braiek
An Examination of the Robustness of Reference-Free Image Captioning Evaluation Metrics
Saba Ahmadi
A hierarchical Bayesian brain parcellation framework for fusion of functional imaging datasets
Da Zhi
Ladan Shahshahani
Caroline Nettekoven
Ana Luísa Pinho
Jörn Diedrichsen
Model evaluation for extreme risks
Toby Shevlane
Sebastian Farquhar
Ben Garfinkel
Mary Phuong
Jess Whittlestone
Jade Leung
Daniel Kokotajlo
Nahema A. Marchal
Markus Anderljung
Noam Kolt
Lewis Ho
Divya Siddarth
Shahar Avin
W. Hawkins
Been Kim
Iason Gabriel
Vijay Bolina
Jack Clark
Paul F. Christiano
Allan Dafoe
De novo motor learning creates structure in neural activity space that shapes adaptation
Joanna C. Chang
Lee Miller
Juan A. Gallego
Claudia Clopath
Realistically distributing object placements in synthetic training data improves the performance of vision-based object detection models
Setareh Dabiri
Vasileios Lioutas
Berend Zwartsenberg
Yunpeng Liu
Matthew Niedoba
Xiaoxuan Liang
Dylan Green
Justice Sefas
Jonathan Wilder Lavington
Adam Ścibior
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data. We investigate specifically the impact of object placement distribution, keeping all other aspects of synthetic data fixed. Our experiment, training a 3D vehicle detection model in CARLA and testing on KITTI, demonstrates a substantial improvement resulting from improving the object placement distribution.
Think Before You Act: Decision Transformers with Internal Working Memory
Jikun Kang
Romain Laroche
Xingdi Yuan
Adam P. Trischler
Xuefei Liu
Jie Fu
Large language model (LLM)-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and compute. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout training. As a result, training on a new task may deteriorate the model's performance on previous tasks. In contrast to LLMs' implicit memory mechanism, the human brain utilizes distributed memory storage, which helps manage and organize multiple skills efficiently, mitigating the forgetting phenomenon. Thus inspired, we propose an internal working memory module to store, blend, and retrieve information for different downstream tasks. Evaluation results show that the proposed method improves training efficiency and generalization in both Atari games and meta-world object manipulation tasks. Moreover, we demonstrate that memory fine-tuning further enhances the adaptability of the proposed architecture.