
Frank Wood

Affiliate Member
Canada CIFAR AI Chair
Associate Professor, University of British Columbia, Department of Computer Science
CEO, Inverted AI

Biography

Frank Wood is an associate professor of computer science at the University of British Columbia and an affiliate member of Mila – Quebec Artificial Intelligence Institute. He is also the CEO of Inverted AI.

His research interests include probabilistic programming, machine learning, and probabilistic AI. He is particularly interested in Bayesian methods and unsupervised learning.

Publications

All-in-one simulation-based inference
Manuel Gloeckler
Michael Deistler
Christian Dietrich Weilbach
Jakob H. Macke
Layerwise Proximal Replay: A Proximal Point Method for Online Continual Learning
Jinsoo Yoo
Yunpeng Liu
Geoff Pleiss
Nearest Neighbour Score Estimators for Diffusion Generative Models
Matthew Niedoba
Dylan Green
Saeid Naderiparizi
Vasileios Lioutas
Jonathan Wilder Lavington
Xiaoxuan Liang
Yunpeng Liu
Ke Zhang
Setareh Dabiri
Adam Ścibior
Berend Zwartsenberg
Score function estimation is the cornerstone of both training and sampling from diffusion generative models. Despite this fact, the most commonly used estimators are either biased neural network approximations or high variance Monte Carlo estimators based on the conditional score. We introduce a novel nearest neighbour score function estimator which utilizes multiple samples from the training set to dramatically decrease estimator variance. We leverage our low variance estimator in two compelling applications. Training consistency models with our estimator, we report a significant increase in both convergence speed and sample quality. In diffusion models, we show that our estimator can replace a learned network for probability-flow ODE integration, opening promising new avenues of future research. Code will be released upon paper acceptance.
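As a rough illustration of the general idea (not the estimator or diffusion parameterization from the paper), the score of a Gaussian-smoothed data distribution can be estimated directly from training points: the smoothed density is a mixture of Gaussians centred on the data, so its score is a posterior-weighted average of per-point terms, and restricting the sum to nearest neighbours keeps the cost manageable. The name `knn_score` and all constants below are invented for this sketch.

```python
import numpy as np

def knn_score(x_t, train_data, sigma_t, k=64):
    """Estimate the score of the Gaussian-smoothed data distribution at x_t.

    Illustrative sketch: the smoothed density is a mixture of Gaussians
    centred on training points, so its score is a posterior-weighted average
    of the per-point terms (x_i - x_t) / sigma_t**2.
    """
    # Squared distances to all training points; keep only the k nearest.
    d2 = np.sum((train_data - x_t) ** 2, axis=1)
    idx = np.argpartition(d2, k)[:k]
    neighbours, d2 = train_data[idx], d2[idx]

    # Posterior weights proportional to N(x_t; x_i, sigma_t^2 I), computed stably.
    logw = -d2 / (2.0 * sigma_t ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Weighted average of per-point scores.
    return (w[:, None] * (neighbours - x_t)).sum(axis=0) / sigma_t ** 2

# Toy usage: a 2-D training set and one query point.
rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 2))
print(knn_score(np.array([0.5, -0.2]), data, sigma_t=0.3))
```

Truncating the mixture to the k nearest neighbours introduces a small bias but gives a much lower-variance estimate than a single-sample conditional-score term, which is the trade-off the abstract alludes to.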
On the Challenges and Opportunities in Generative AI
Laura Manduchi
Kushagra Pandey
Robert Bamler
Ryan Cotterell
Sina Däubener
Sophie Fellenz
Asja Fischer
Thomas Gärtner
Matthias Kirchler
Marius Kloft
Yingzhen Li
Christoph Lippert
Gerard de Melo
Eric T. Nalisnick
Björn Ommer
Rajesh Ranganath
Maja Rudolph
Karen Ullrich
Guy Van den Broeck
Julia E. Vogt
Yixin Wang
Florian Wenzel
Stephan Mandt
Vincent Fortuin
A Diffusion-Model of Joint Interactive Navigation
Matthew Niedoba
Jonathan Wilder Lavington
Yunpeng Liu
Vasileios Lioutas
Justice Sefas
Xiaoxuan Liang
Dylan Green
Setareh Dabiri
Berend Zwartsenberg
Adam Ścibior
Simulation of autonomous vehicle systems requires that simulated traffic participants exhibit diverse and realistic behaviors. The use of prerecorded real-world traffic scenarios in simulation ensures realism, but the rarity of safety-critical events makes large-scale collection of driving scenarios expensive. In this paper, we present DJINN, a diffusion-based method of generating traffic scenarios. Our approach jointly diffuses the trajectories of all agents, conditioned on a flexible set of state observations from the past, present, or future. On popular trajectory forecasting datasets, we report state-of-the-art performance on joint trajectory metrics. In addition, we demonstrate how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions, including goal-based sampling, behavior-class sampling, and scenario editing.
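The sketch below is a generic illustration of conditioning a joint diffusion over a scenario tensor on observed agent states by clamping them at each reverse step; it is not the DJINN architecture, and all shapes, names, and the stand-in denoiser are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, num_steps, state_dim = 4, 20, 2   # e.g. x/y positions per agent per step

# Boolean mask marking which states are observed (here: each agent's first 5 steps).
observed = np.zeros((num_agents, num_steps, state_dim), dtype=bool)
observed[:, :5] = True
scenario_obs = rng.normal(size=(num_agents, num_steps, state_dim))  # placeholder log

def denoise_step(x, t, denoiser):
    """One reverse step; observed states are clamped back to their recorded values."""
    x = denoiser(x, t)
    return np.where(observed, scenario_obs, x)

def toy_denoiser(x, t):
    """Stand-in so the sketch runs end to end; a real system would use a trained network."""
    return 0.9 * x

x = rng.normal(size=(num_agents, num_steps, state_dim))
for t in reversed(range(10)):
    x = denoise_step(x, t, toy_denoiser)
print(x.shape)  # (4, 20, 2): one jointly sampled trajectory per agent
```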
Don't be so negative! Score-based Generative Modeling with Oracle-assisted Guidance
Saeid Naderiparizi
Xiaoxuan Liang
Berend Zwartsenberg
Uncertain Evidence in Probabilistic Models and Stochastic Simulators
Andreas Munk
Alexander Mead
We consider the problem of performing Bayesian inference in probabilistic models where observations are accompanied by uncertainty, referred to as "uncertain evidence." We explore how to interpret uncertain evidence, and by extension the importance of proper interpretation as it pertains to inference about latent variables. We consider a recently proposed method, "distributional evidence," as well as revisit two older methods: Jeffrey's rule and virtual evidence. We devise guidelines on how to account for uncertain evidence and we provide new insights, particularly regarding consistency. To showcase the impact of different interpretations of the same uncertain evidence, we carry out experiments in which one interpretation is defined as "correct." We then compare inference results from each different interpretation, illustrating the importance of careful consideration of uncertain evidence.
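Since the abstract contrasts Jeffrey's rule and virtual evidence, here is a minimal worked example (not from the paper; all numbers are invented) showing that the two interpretations of the same uncertain evidence generally yield different posteriors over a binary latent variable H.

```python
import numpy as np

# Joint model: prior P(H) and likelihood P(E | H) for binary H and E.
p_h = np.array([0.7, 0.3])                    # P(H=0), P(H=1)
p_e_given_h = np.array([[0.9, 0.1],           # P(E | H=0)
                        [0.2, 0.8]])          # P(E | H=1)
joint = p_h[:, None] * p_e_given_h            # P(H, E), rows H, columns E

q_e = np.array([0.2, 0.8])                    # uncertain evidence: "E=1 with probability 0.8"

# Jeffrey's rule: mix the conditionals P(H | E=e) with the stated probabilities.
p_h_given_e = joint / joint.sum(axis=0, keepdims=True)   # columns are P(H | E=e)
jeffrey = p_h_given_e @ q_e

# Virtual evidence: treat the stated probabilities as a likelihood ratio over E.
virtual = joint @ q_e
virtual /= virtual.sum()

print("Jeffrey :", jeffrey)   # P'(H) under Jeffrey's rule
print("Virtual :", virtual)   # P'(H) under virtual evidence
```

Jeffrey's rule fixes the updated marginal of the evidence variable to the stated probabilities, whereas virtual evidence treats them as a likelihood ratio, which is why the two posteriors over H come out different here.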
Scaling Graphically Structured Diffusion Models
Christian Dietrich Weilbach
William Harvey
Hamed Shirzad
Applications of the recently introduced graphically structured diffusion model (GSDM) family show that sparsifying the transformer attention mechanism within a diffusion model and meta-training on a variety of conditioning tasks can yield an efficiently learnable diffusion model artifact that is capable of flexible, in the sense of observing different subsets of variables at test-time, amortized conditioning in probabilistic graphical models. While extremely promising in terms of applicability and utility, implementations of GSDMs prior to this work were not scalable beyond toy graphical model sizes. We overcome this limitation by describing and solving two scaling issues related to GSDMs: one engineering and one methodological. We additionally propose a new benchmark problem of weight inference for a convolutional neural network applied to
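One ingredient mentioned above, sparsifying transformer attention according to a graphical-model structure, can be illustrated with a plain masked self-attention head. This is a generic sketch (dense masking rather than the efficient sparse implementation a scalable GSDM would need), and the weights and mask below are random placeholders.

```python
import numpy as np

def masked_attention(X, adj, W_q, W_k, W_v):
    """Single-head self-attention restricted to variable pairs allowed by a mask.

    X   : (n, d) one row per graphical-model variable
    adj : (n, n) boolean mask, True where attention between variables is allowed
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    logits = Q @ K.T / np.sqrt(K.shape[1])
    logits = np.where(adj, logits, -np.inf)            # drop disallowed pairs
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 6, 8
X = rng.normal(size=(n, d))
adj = np.eye(n, dtype=bool) | (rng.random((n, n)) < 0.3)   # always attend to self
W = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
print(masked_attention(X, adj, *W).shape)   # (6, 8)
```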
Visual Chain-of-Thought Diffusion Models
William Harvey
Recent progress with conditional image diffusion models has been stunning, and this holds true whether we are speaking about models conditioned on a text description, a scene layout, or a sketch. Unconditional image diffusion models are also improving but lag behind, as do diffusion models which are conditioned on lower-dimensional features like class labels. We propose to close the gap between conditional and unconditional models using a two-stage sampling procedure. In the first stage we sample an embedding describing the semantic content of the image. In the second stage we sample the image conditioned on this embedding and then discard the embedding. Doing so lets us leverage the power of conditional diffusion models on the unconditional generation task, which we show improves FID by 25–50% compared to standard unconditional generation.
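Structurally, the two-stage procedure amounts to composing two samplers. The stubs below are placeholders (random draws, made-up shapes) that only show the control flow, not the trained diffusion models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_embedding():
    """Stage 1: draw a semantic embedding (placeholder for a learned sampler)."""
    return rng.normal(size=(512,))

def sample_image_given_embedding(z):
    """Stage 2: draw an image conditioned on the embedding z (placeholder)."""
    return rng.normal(size=(3, 64, 64)) + 0.01 * z.mean()

def unconditional_sample():
    """Unconditional generation via the two-stage procedure: sample an
    embedding, condition on it, then discard it."""
    z = sample_embedding()
    return sample_image_given_embedding(z)

print(unconditional_sample().shape)   # (3, 64, 64)
```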
Realistically distributing object placements in synthetic training data improves the performance of vision-based object detection models
Setareh Dabiri
Vasileios Lioutas
Berend Zwartsenberg
Yunpeng Liu
Matthew Niedoba
Xiaoxuan Liang
Dylan Green
Justice Sefas
Jonathan Wilder Lavington
Adam Ścibior
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data. We investigate specifically the impact of object placement distribution, keeping all other aspects of synthetic data fixed. Our experiment, training a 3D vehicle detection model in CARLA and testing on KITTI, demonstrates a substantial improvement resulting from improving the object placement distribution.
Conditional Permutation Invariant Flows
Berend Zwartsenberg
Adam Ścibior
Matthew Niedoba
Vasileios Lioutas
Justice Sefas
Yunpeng Liu
Setareh Dabiri
Jonathan Wilder Lavington
Trevor Campbell
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density. This model is a continuous normalizing flow governed by permutation equivariant dynamics. These dynamics are driven by a learnable per-set-element term and pairwise interactions, both parametrized by deep neural networks. We illustrate the utility of this model via applications including (1) complex traffic scene generation conditioned on visually specified map information, and (2) object bounding box generation conditioned directly on images. We train our model by maximizing the expected likelihood of labeled conditional data under our flow, with the aid of a penalty that ensures the dynamics are smooth and hence efficiently solvable. Our method significantly outperforms non-permutation invariant baselines in terms of log likelihood and domain-specific metrics (offroad, collision, and combined infractions), yielding realistic samples that are difficult to distinguish from real data.
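As a hedged sketch of what permutation-equivariant dynamics of this form can look like (random linear maps stand in for the paper's deep networks; this is not the authors' implementation), the velocity of each set element is a per-element term plus a sum of pairwise interactions, and the equivariance can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
# Random linear maps stand in for the networks parameterizing the per-element
# term f and the pairwise interaction g described in the abstract.
W_f = rng.normal(size=(d, d))
W_g = rng.normal(size=(d, 2 * d))

def f(x):                      # per-set-element term
    return np.tanh(x @ W_f.T)

def g(xi, xj):                 # pairwise interaction term
    return np.tanh(np.concatenate([xi, xj], axis=-1) @ W_g.T)

def dynamics(X):
    """Permutation-equivariant velocity field dX/dt for a set X of shape (n, d)."""
    n = X.shape[0]
    pair = sum(g(X, np.roll(X, k, axis=0)) for k in range(1, n))  # sum over j != i
    return f(X) + pair

X = rng.normal(size=(5, d))
perm = rng.permutation(5)
# Equivariance check: permuting the set then applying the dynamics matches
# applying the dynamics and then permuting the result.
print(np.allclose(dynamics(X[perm]), dynamics(X)[perm]))   # True
```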