
Karsten Roth

Alumni

Publications

Disentanglement of Correlated Factors via Hausdorff Factorized Support
Mark Ibrahim
Zeynep Akata
Diane Bouchacourt
A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction, aimed at aligning a model's representation with the underlying factors generating the data (e.g., color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we consider the use of a relaxed disentanglement criterion -- the Hausdorff Factorized Support (HFS) criterion -- that encourages only pairwise factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows for arbitrary distributions of the factors over their support, including correlations between them. We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, in parts with over 60% relative improvement over existing disentanglement methods.
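To make the idea concrete, below is a minimal PyTorch sketch of a pairwise factorized-support penalty in the spirit of HFS. The function name, the brute-force Cartesian product of minibatch samples, and the use of a hard (rather than relaxed) directed Hausdorff distance are assumptions of this sketch, not the paper's exact estimator.

```python
import torch

def hfs_penalty(z: torch.Tensor) -> torch.Tensor:
    """Pairwise factorized-support penalty in the spirit of HFS (sketch).

    For each pair of latent dimensions (i, j), the minibatch samples
    approximate the joint support, and the Cartesian product of the
    per-dimension samples approximates the factorized support. The
    directed Hausdorff distance from the product support to the joint
    samples is penalized, pushing the joint support to cover the full
    product of the marginal supports.
    """
    n, d = z.shape
    loss = z.new_zeros(())
    for i in range(d):
        for j in range(i + 1, d):
            joint = torch.stack((z[:, i], z[:, j]), dim=1)     # (n, 2)
            prod = torch.stack((z[:, i].repeat_interleave(n),  # (n*n, 2)
                                z[:, j].repeat(n)), dim=1)
            dists = torch.cdist(prod, joint)                   # (n*n, n)
            # Directed Hausdorff distance: the farthest product point
            # from its nearest observed joint sample.
            loss = loss + dists.min(dim=1).values.max()
    return loss / (d * (d - 1) / 2)
```

In training, a penalty like this would typically be added to a reconstruction or ELBO objective with a weighting coefficient.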
The Liver Tumor Segmentation Benchmark (LiTS)
Patrick Bilic
Patrick Christ
Hongwei Bran Li
Grzegorz Chlebus
Hao Chen
Qi Dou
Chi-Wing Fu
Xu Han
Gabriel Efrain Humpire Mamani
Pheng Ann Heng
Jürgen Hesser
Samuel Kadoury
Julian Walter Holch
Tomasz Konopczynski
Miao Yue
Chunming Li
X. Li
Jana Lipková
John Lowengrub
Michal Marianne Amitai
Hans Meine
Marie Piraud
Ivan Ezhov
Xiaojuan Qi
Fernando Navarro
Jin Qi
Florian Kofler
Markus Rempfler
Johannes C. Paetzold
Suprosanna Shit
Andrea Schenk
Xiaobin Hu
Anjany Sekuboyina
Ping Zhou
Christian Hülsemeyer
Marcel Beetz
Jan Kirschke
Florian Ettlinger
Felix Gruen
Benedikt Wiestler
Zhiheng Zhang
Georgios Kaissis
Fabian Lohöfer
Rickmer Braren
Michela Antonelli
Felix Hofmann
Woong Bae
Wieland Sommer
Míriam Bellver
Volker Heinemann
Lei Bi
Colin Jacobs
Bram van Ginneken
Erik B. Dam
Gabriel Chartrand
An Tang
Bogdan Georgescu
Avi Ben-Cohen
Xavier Giró-i-Nieto
Eyal Klang
E. Konen
Hayit Greenspan
Johan Moreau
Jan Hendrik Moltz
Alexandre Hostettler
Christian Igel
Luc Soler
Fabian Isensee
Refael Vivanti
Paul Jäger
Adi Szeskin
Fucang Jia
Naama Lev-Cohain
Krishna Chaitanya Kaluva
Jacob Sosna
Mahendra Khened
Leo Joskowicz
Ildoo Kim
Bjoern Menze
Jae-Hun Kim
Zengming Shen
Sungwoong Kim
Simon Kohl
Avinash Kori
Ganapathy Krishnamurthi
Fan Li
Hongchao Li
Junbo Li
Xiaomeng Li
Jun Ma
Klaus Maier-Hein
Kevis-Kokitsi Maninis
Dorit Merhof
Akshay Pai
Mathias Perslev
Jens Petersen
Jordi Pont-Tuset
Oliver Rippel
Ignacio Sarasua
Jordi Torres
Christian Wachinger
Chunliang Wang
Leon Weninger
Jianrong Wu
Daguang Xu
Xiaoping Yang
Simon Chun-Ho Yu
Yading Yuan
Liping Zhang
Jorge Cardoso
Spyridon Bakas
Uniform Priors for Data-Efficient Learning
Samarth Sinha
Marzyeh Ghassemi
Zeynep Akata
Animesh Garg
Few- or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to find properties that encourage more transferable features in deep networks for generalization. In this paper, we show that models that learn uniformly distributed features from the training data are able to perform better transfer learning at test time. Motivated by this, we evaluate our method, uniformity regularization (UR), on its ability to facilitate adaptation to unseen tasks and data across six distinct domains: Few-shot Learning with Images, Few-shot Learning with Language, Deep Metric Learning, Zero-Shot Domain Adaptation, Out-of-Distribution Classification, and Neural Radiance Fields. Across all experiments, we show that using UR we are able to learn robust vision systems that consistently outperform baselines trained without uniformity regularization, and that achieve state-of-the-art performance in Deep Metric Learning and in Few-shot Learning with images and language.
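As an illustration, here is a minimal sketch of a hyperspherical uniformity regularizer of the kind the abstract describes, following the common log-mean Gaussian-potential formulation (Wang & Isola, 2020). The function name, the temperature t, and the loss weighting are assumptions of this sketch; the paper's exact UR term may differ.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Log of the mean pairwise Gaussian potential on the unit sphere.

    Lower values mean the L2-normalized features are spread more
    uniformly over the hypersphere.
    """
    z = F.normalize(features, dim=1)        # project features onto unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)   # all pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()

# Hypothetical usage: regularize any task loss with a small weight.
# loss = task_loss + 0.1 * uniformity_loss(embeddings)
```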
DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning
Timo Milbich
Samarth Sinha
Björn Ommer
Predicting COVID-19 Pneumonia Severity on Chest X-ray With Deep Learning
Beiyi Shen
Almas F Abbasi
Hoshmand Kochi Mahsa
Marzyeh Ghassemi
Haifang Li
Tim Q Duong
Introduction: The need to streamline patient management for coronavirus disease-19 (COVID-19) has become more pressing than ever. Chest X-rays (CXRs) provide a non-invasive (potentially bedside) tool to monitor the progression of the disease. In this study, we present a severity score prediction model for COVID-19 pneumonia on frontal chest X-ray images. Such a tool can gauge the severity of COVID-19 lung infections (and pneumonia in general) for the escalation or de-escalation of care, as well as for monitoring treatment efficacy, especially in the ICU.

Methods: Images from a public COVID-19 database were scored retrospectively by three blinded experts in terms of the extent of lung involvement and the degree of opacity. A neural network model pre-trained on large (non-COVID-19) chest X-ray datasets was used to construct features for COVID-19 images that are predictive for our task.

Results: Training a regression model on a subset of the outputs from this pre-trained chest X-ray model predicts our geographic extent score (range 0-8) with 1.14 mean absolute error (MAE) and our lung opacity score (range 0-6) with 0.78 MAE.

Conclusions: These results indicate that our model's ability to gauge the severity of COVID-19 lung infections could be used for the escalation or de-escalation of care, as well as for monitoring treatment efficacy, especially in the ICU. To enable follow-up work, we make our code, labels, and data available online.
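For a feel of the general recipe the abstract describes (a simple regression model fit on features from a pre-trained chest X-ray network, evaluated by MAE), here is a minimal sketch. The file names, the 80/20 split, and the choice of plain linear regression are assumptions of this sketch, not the authors' released code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical arrays: `features` are outputs of a chest X-ray model
# pre-trained on large non-COVID-19 datasets; `geo_scores` are expert
# geographic-extent labels (range 0-8). Names here are illustrative.
features = np.load("cxr_features.npy")   # shape (n_images, n_outputs)
geo_scores = np.load("geo_scores.npy")   # shape (n_images,)

# Simple train/test split, then a regression head on the frozen features.
rng = np.random.default_rng(0)
idx = rng.permutation(len(features))
split = int(0.8 * len(idx))
train, test = idx[:split], idx[split:]

reg = LinearRegression().fit(features[train], geo_scores[train])
preds = np.clip(reg.predict(features[test]), 0, 8)  # keep within score range
print("MAE:", mean_absolute_error(geo_scores[test], preds))
```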