Joey Bose

Affiliate Member
University of Oxford, Department of Computer Science
Research Topics
AI for Science
Deep Learning
Generative Models
Geometric Deep Learning
Molecular Modeling

Publications

On the Stability of Iterative Retraining of Generative Models on their own Data
Quentin Bertrand
Alexandre Duplessis
Marco Jiralerspong
Deep generative models have made tremendous progress in modeling complex data, often exhibiting generation quality that surpasses a typical human's ability to discern the authenticity of samples. Undeniably, a key driver of this success is enabled by the massive amounts of web-scale data consumed by these models. Due to these models' striking performance and ease of availability, the web will inevitably be increasingly populated with synthetic content. Such a fact directly implies that future iterations of generative models will be trained on both clean and artificially generated data from past models. In this paper, we develop a framework to rigorously study the impact of training generative models on mixed datasets---from classical training on real data to self-consuming generative models trained on purely synthetic data. We first prove the stability of iterative training under the condition that the initial generative models approximate the data distribution well enough and the proportion of clean training data (w.r.t. synthetic data) is large enough. We empirically validate our theory on both synthetic and natural images by iteratively training normalizing flows and state-of-the-art diffusion models on CIFAR10 and FFHQ.
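The loop below is a minimal sketch, not the paper's code, of the self-consuming training setup the abstract describes: each generation is fit on a mixture of clean data and samples drawn from the previous generation's model. The names `train_generative_model` and `model.sample` are hypothetical placeholders for whatever generative-model training and sampling routines are used.

```python
# Minimal sketch of iterative retraining on a real/synthetic mixture (assumed names).
import numpy as np

def iterative_retraining(real_data, train_generative_model, n_generations=5,
                         clean_fraction=0.8):
    """Train a sequence of generative models, each on a mix of real data
    and synthetic samples produced by its predecessor."""
    model = train_generative_model(real_data)        # generation 0: real data only
    n_real = int(clean_fraction * len(real_data))    # clean samples kept per generation
    n_synth = len(real_data) - n_real                # synthetic samples per generation
    for _ in range(n_generations):
        synthetic = model.sample(n_synth)            # content generated by the last model
        mixed = np.concatenate([real_data[:n_real], synthetic], axis=0)
        model = train_generative_model(mixed)        # next generation trains on the mixture
    return model
```

The `clean_fraction` knob corresponds to the proportion of real data whose size, together with the quality of the initial model, governs the stability result stated in the abstract.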
Sequence-Augmented SE(3)-Flow Matching For Conditional Protein Generation
Guillaume Huguet
James Vuckovic
Kilian Fatras
Eric Laufer
Pablo Lemos
Riashat Islam
Cheng-Hao Liu
Jarrid Rector-Brooks
Tara Akhound-Sadegh
Michael M. Bronstein
Alexander Tong
A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis
Damien Ferbach
Christos Tsirigotis
Avishek Joey Bose
The Strong Lottery Ticket Hypothesis (SLTH) stipulates the existence of a subnetwork within a sufficiently overparameterized (dense) neural network that -- when initialized randomly and without any training -- achieves the accuracy of a fully trained target network. Recent works by Da Cunha et al. (2022) and Burkholz (2022) demonstrate that the SLTH can be extended to translation-equivariant networks -- i.e. CNNs -- with the same level of overparametrization as needed for strong lottery tickets in dense networks. However, modern neural networks are capable of incorporating more than just translation symmetry, and developing general equivariant architectures, such as those for rotation and permutation symmetries, has been a powerful design principle. In this paper, we generalize the SLTH to functions that preserve the action of the group
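As a point of reference, the snippet below is a minimal sketch, not the paper's construction, of the strong-lottery-ticket setting: a randomly initialized, overparameterized layer is never trained; instead one looks for a binary pruning mask so that the masked random weights approximate a target function. The shapes and the random mask are purely illustrative (in practice the mask is found by a search procedure).

```python
# Minimal sketch of a pruned random subnetwork (illustrative shapes and mask).
import numpy as np

rng = np.random.default_rng(0)

W_random = rng.standard_normal((512, 64))   # randomly initialized weights, never updated
mask = rng.random((512, 64)) < 0.5          # binary mask selecting a subnetwork

def masked_forward(x, W, m):
    """Forward pass through the pruned (masked) random layer with a ReLU."""
    return np.maximum(x @ (W * m), 0.0)

x = rng.standard_normal((8, 512))
y = masked_forward(x, W_random, mask)       # output of the untrained subnetwork
```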
Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples
Marco Jiralerspong
Ian Gemp
Chongli Qin
Yoram Bachrach