Publications

Semantic Commit: Helping Users Update Intent Specifications for AI Memory at Scale
Priyan Vaithilingam
Frida-Cecilia Acosta-Parenteau
Daniel Lee
Amine Mhedhbi
Elena L. Glassman
Sliced-Wasserstein Distance-based Data Selection
We propose a new unsupervised anomaly detection method based on the sliced-Wasserstein distance for training data selection in machine learning approaches. Our filtering technique is well suited to decision-making pipelines that deploy machine learning models in critical sectors, e.g., power systems, as it offers conservative data selection and an optimal-transport interpretation. To ensure the scalability of our method, we provide two efficient approximations. The first processes reduced-cardinality representations of the datasets concurrently. The second uses a computationally light Euclidean distance approximation. Additionally, we release the first dataset showcasing localized critical peak rebate demand response in a northern climate. We present the filtering patterns of our method on synthetic datasets and numerically benchmark it for training data selection. Finally, we employ our method as part of a first forecasting benchmark for our open-source dataset.
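The sliced-Wasserstein distance at the core of this method reduces a high-dimensional optimal-transport problem to many one-dimensional ones, each solvable by sorting. The sketch below is an illustrative Monte-Carlo estimator, not the authors' implementation; the projection count and equal sample sizes are assumptions:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte-Carlo estimate of the sliced-Wasserstein-2 distance between
    two empirical distributions X and Y of shape (n_samples, dim).
    Assumes equal sample counts, so each 1-D transport problem reduces
    to matching sorted projections."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    # Draw random unit directions on the sphere.
    theta = rng.normal(size=(n_projections, dim))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction.
    x_proj = X @ theta.T  # shape (n, n_projections)
    y_proj = Y @ theta.T
    # 1-D Wasserstein-2 between sorted projections, averaged over directions.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return np.sqrt(np.mean((x_sorted - y_sorted) ** 2))
```

For data selection, one would score candidate training points by how much their inclusion shifts this distance and keep only conservative samples.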
TAPNext: Tracking Any Point (TAP) as Next Token Prediction
Carl Doersch
Yi Yang
Skanda Koppula
Viorica Patraucean
Xu Owen He
Ignacio Rocco
Mehdi S. M. Sajjadi
Towards Assessing Deep Learning Test Input Generators
Seif Mzoughi
Ahmed Haj Yahmed
Mohamed Elshafei
Diego Elias Costa
Trade-off of different deep learning-based auto-segmentation approaches for treatment planning of pediatric craniospinal irradiation
Alana Thibodeau‐Antonacci
Marija Popovic
Ozgur Ates
Chia‐Ho Hua
James Schneider
Sonia Skamene
Carolyn Freeman
James Man Git Tsui
As auto-segmentation tools become integral to radiotherapy, more commercial products emerge. However, they may not always suit our needs. One notable example is the use of adult-trained commercial software for contouring the organs at risk (OARs) of pediatric patients.
View-Dependent Deformation Fields for 2D Editing of 3D Models
Why do LLMs attend to the first token?
Federico Barbero
Álvaro Arroyo
Xiangming Gu
Christos Perivolaropoulos
Michael M. Bronstein
Petar Veličković
NoProp: Training Neural Networks without Full Back-propagation or Full Forward-propagation
Qinyu Li
Yee Whye Teh
The canonical deep learning approach to learning requires computing a gradient term at each block by back-propagating the error signal from the output towards each learnable parameter. Given the stacked structure of neural networks, where each block builds on the representation of the block below, this approach leads to hierarchical representations. More abstract features live in the top blocks of the model, while features in lower blocks are expected to be less abstract. In contrast, we introduce a new learning method named NoProp, which does not rely on either forward or backward propagation across the entire network. Instead, NoProp takes inspiration from diffusion and flow matching methods, where each block independently learns to denoise a noisy target using only local targets and back-propagation within the block. We believe this work takes a first step towards a new family of learning methods that does not learn hierarchical representations -- at least not in the usual sense. NoProp fixes the representation at each block beforehand to a noised version of the target, learning a local denoising process that can then be exploited at inference. We demonstrate the effectiveness of our method on the MNIST, CIFAR-10, and CIFAR-100 image classification benchmarks. Our results show that NoProp is a viable learning algorithm that is easy to use and computationally efficient. By departing from the traditional learning paradigm, which requires back-propagating a global error signal, NoProp alters how credit assignment is done within the network, enabling more efficient distributed learning as well as potentially impacting other characteristics of the learning process.
NoProp: Training Neural Networks without Back-propagation or Forward-propagation
Qinyu Li
Yee Whye Teh
Language-Guided Trajectory Traversal in Disentangled Stable Diffusion Latent Space for Factorized Medical Image Generation
Text-to-image diffusion models have demonstrated a remarkable ability to generate photorealistic images from natural language prompts. These high-resolution, language-guided synthesized images are essential for explaining disease and exploring causal relationships. However, their potential for disentangling and controlling latent factors of variation in specialized domains like medical imaging remains under-explored. In this work, we present the first investigation of the power of pre-trained vision-language foundation models, once fine-tuned on medical image datasets, to perform latent disentanglement for factorized medical image generation and interpolation. Through extensive experiments on chest X-ray and skin datasets, we illustrate that fine-tuned, language-guided Stable Diffusion inherently learns to factorize key attributes for image generation, such as the patient's anatomical structures or disease diagnostic features. We devise a framework to identify, isolate, and manipulate key attributes through latent space trajectory traversal of generative models, facilitating precise control over medical image synthesis.
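The trajectory-traversal idea can be sketched in miniature: estimate a latent direction for an attribute, then move a latent code along it. This is a generic latent-editing sketch under assumed NumPy latents, not the paper's Stable Diffusion framework; the mean-difference estimator and step schedule are assumptions:

```python
import numpy as np

def attribute_direction(z_with, z_without):
    """Estimate a unit latent direction for an attribute as the mean
    difference between latent codes of samples with and without it."""
    d = z_with.mean(axis=0) - z_without.mean(axis=0)
    return d / np.linalg.norm(d)

def traverse(z, direction, steps=5, scale=3.0):
    """Move a single latent code along the attribute direction,
    yielding a trajectory of latents a decoder could render as a
    smooth, factorized edit of the corresponding image."""
    ts = np.linspace(0.0, scale, steps)
    return np.stack([z + t * direction for t in ts])
```

In the paper's setting the direction would instead be identified with language guidance (e.g. prompts with and without a diagnostic finding), and each latent along the trajectory would be decoded by the fine-tuned diffusion model.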