BARVINN: Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU
Mohammadhossein Askarihemmat
Sean Wagner
O. Bilaniuk
Yassine Hariri
Yvon Savaria
J. David
We present a DNN accelerator that allows inference at arbitrary precision with dedicated processing elements that are configurable at the bit level. Our DNN accelerator has 8 Processing Elements controlled by a RISC-V controller with a combined 8.2 TMACs of computational power when implemented with the recent Alveo U250 FPGA platform. We develop a code generator tool that ingests CNN models in ONNX format and generates an executable command stream for the RISC-V controller. We demonstrate the scalable throughput of our accelerator by running different DNN kernels and models when different quantization levels are selected. Compared to other low precision accelerators, our accelerator provides run-time programmability without hardware reconfiguration and can accelerate DNNs with multiple quantization levels, regardless of the target FPGA size. BARVINN is an open source project and it is available at https://github.com/hossein1387/BARVINN.
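As a rough illustration of the ONNX ingestion step such a code generator performs, the sketch below walks an ONNX graph operator by operator; it is not BARVINN's actual generator, and the model path is a placeholder.

```python
# Illustrative sketch only: traverse an ONNX graph and print one line per
# operator, in the spirit of a code generator targeting a RISC-V-controlled
# accelerator. A real generator would map supported ops (Conv, MatMul, ...)
# to accelerator command words and schedule them on the processing elements.
import onnx

model = onnx.load("model.onnx")  # placeholder path

for node in model.graph.node:
    print(f"{node.op_type}: inputs={list(node.input)} outputs={list(node.output)}")
```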
Offline Policy Optimization in RL with Variance Regularization
Riashat Islam
Samarth Sinha
Homanga Bharadhwaj
Samin Yeasar Arnob
Zhuoran Yang
Animesh Garg
Zhaoran Wang
Lihong Li
Simplicity and learning to distinguish arguments from modifiers
Leon Bergen
E. Gibson
How programmers find online learning resources
Deeksha M. Arya
Martin P. Robillard
FaithDial: A Faithful Benchmark for Information-Seeking Dialogue
Nouha Dziri
Ehsan Kamalloo
Sivan Milton
Osmar Zaiane
Mo Yu
Edoardo Ponti
The goal of information-seeking dialogue is to respond to seeker queries with natural language utterances that are grounded on knowledge sources. However, dialogue systems often produce unsupported utterances, a phenomenon known as hallucination. To mitigate this behavior, we adopt a data-centric solution and create FaithDial, a new benchmark for hallucination-free dialogues, by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark. We observe that FaithDial is more faithful than WoW while also maintaining engaging conversations. We show that FaithDial can serve as training signal for: i) a hallucination critic, which discriminates whether an utterance is faithful or not, and boosts the performance by 12.8 F1 score on the BEGIN benchmark compared to existing datasets for dialogue coherence; ii) high-quality dialogue generation. We benchmark a series of state-of-the-art models and propose an auxiliary contrastive objective that achieves the highest level of faithfulness and abstractiveness based on several automated metrics. Further, we find that the benefits of FaithDial generalize to zero-shot transfer on other datasets, such as CMU-Dog and TopicalChat. Finally, human evaluation reveals that responses generated by models trained on FaithDial are perceived as more interpretable, cooperative, and engaging.
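To make the idea of a hallucination critic concrete, the following minimal sketch scores a knowledge–response pair as faithful or hallucinated with a generic sequence-pair classifier; the checkpoint, labels, and example strings are placeholders, not the FaithDial-trained critic, and a usable critic would first be fine-tuned on FaithDial's faithfulness labels.

```python
# Minimal sketch of a hallucination critic as a binary sequence-pair classifier.
# The base checkpoint and example pair are illustrative placeholders only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

knowledge = "The Eiffel Tower is 330 metres tall."   # grounding source
response = "It stands about 330 metres high."        # candidate utterance

inputs = tokenizer(knowledge, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# Label convention (faithful = 1) is assumed for illustration.
print("faithful" if logits.argmax(-1).item() == 1 else "hallucinated")
```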
Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen
Meta-topologies define distinct anatomical classes of brain tumours linked to histology and survival
Julius M Kernbach
Daniel Delev
Georg Neuloh
Hans Clusmann
Simon B. Eickhoff
Victor E Staartjes
Flavio Vasella
Michael Weller
Luca Regli
Carlo Serra
Niklaus Krayenbühl
Kevin Akeret
Towards Continual Reinforcement Learning: A Review and Perspectives
BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
Zheng Xin Yong
Hailey Schoelkopf
Niklas Muennighoff
Alham Fikri Aji
Khalid Almubarak
M. Saiful Bari
Lintang A. Sutawika
Jungo Kasai
Ahmed Baruwa
Genta Indra Winata
Stella Biderman
Dragomir R. Radev
Vassilina Nikoulina
The BLOOM model is a large publicly available multilingual language model, but its pretraining was limited to 46 languages. To extend the benefits of BLOOM to other languages without incurring prohibitively large costs, it is desirable to adapt BLOOM to new languages not seen during pretraining. In this work, we apply existing language adaptation strategies to BLOOM and benchmark its zero-shot prompting performance on eight new languages in a resource-constrained setting. We find language adaptation to be effective at improving zero-shot performance in new languages. Surprisingly, we find that adapter-based finetuning is more effective than continued pretraining for large models. In addition, we discover that prompting performance is not significantly affected by language specifics, such as the writing system. It is primarily determined by the size of the language adaptation data. We also add new languages to BLOOMZ, which is a multitask finetuned version of BLOOM capable of following task instructions zero-shot. We find including a new language in the multitask fine-tuning mixture to be the most effective method to teach BLOOMZ a new language. We conclude that with sufficient training data language adaptation can generalize well to diverse languages. Our code is available at https://github.com/bigscience-workshop/multilingual-modeling.
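For orientation, adapter-based finetuning of a BLOOM checkpoint can be set up roughly as below with the peft library's LoRA adapters; this is a sketch under assumed hyperparameters and the smallest public BLOOM variant, not the specific adaptation recipe evaluated in the paper.

```python
# Sketch of adapter-style (LoRA) adaptation of a BLOOM checkpoint via peft.
# The checkpoint and hyperparameters are illustrative, not the paper's settings.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "bigscience/bloom-560m"  # smallest public BLOOM variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# Training on language-adaptation data would then proceed as usual (e.g. Trainer).
```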
Biomedical image analysis competitions: The state of current participation practice
Matthias Eisenmann
Annika Reinke
Vivienn Weru
Minu Dietlinde Tizabi
Fabian Isensee
T. Adler
Patrick Godau
Veronika Cheplygina
Michal Kozubek
Sharib Ali
Anubha Gupta
Jan Kybic
Alison Noble
Carlos Ortiz de Solórzano
Samiksha Pachade
Caroline Petitjean
Daniel Sage
Donglai Wei
Elizabeth Wilden
Deepak Alapatt
Vincent Andrearczyk
Ujjwal Baid
Spyridon Bakas
Niranjan Balu
Sophia Bano
Vivek Singh Bawa
Jorge Bernal
Sebastian Bodenstedt
Alessandro Casella
Jinwook Choi
Olivier Commowick
M. Daum
Adrien Depeursinge
Reuben Dorent
J. Egger
H. Eichhorn
Sandy Engelhardt
Melanie Ganz
Gabriel Girard
Lasse Donovan Hansen
Mattias Paul Heinrich
Nicholas Heller
Alessa Hering
Arnaud Huaulmé
Hyunjeong Kim
Bennett Landman
Hongwei Bran Li
Jianning Li
Junfang Ma
Anne L. Martel
Carlos Martín-Isla
Bjoern Menze
Chinedu Innocent Nwoye
Valentin Oreiller
Nicolas Padoy
Sarthak Pati
Kelly Payette
Carole H. Sudre
K. V. Wijnen
Armine Vardazaryan
Tom Kamiel Magda Vercauteren
Martin Wagner
Chuanbo Wang
Moi Hoon Yap
Zeyun Yu
Chuner Yuan
Maximilian Zenk
Aneeq Zia
David Zimmerer
Rina Bao
Chanyeol Choi
Andrew Cohen
Oleh Dzyubachyk
Adrian Galdran
Tianyuan Gan
Tianqi Guo
Pradyumna Gupta
M. Haithami
Edward Ho
Ikbeom Jang
Zhili Li
Zheng Luo
Filip Lux
Sokratis Makrogiannis
Dominikus Muller
Young-Tack Oh
Subeen Pang
Constantin Pape
Görkem Polat
Charlotte Rosalie Reed
Kanghyun Ryu
Tim Scherr
Vajira L. Thambawita
Haoyu Wang
Xinliang Wang
Kele Xu
H.-I. Yeh
Doyeob Yeo
Yi Yuan
Yan Zeng
Xingwen Zhao
Julian Ronald Abbing
Jannes Adam
Nagesh Adluru
Niklas Agethen
S. Ahmed
Yasmina Al Khalil
Mireia Alenya
Esa J. Alhoniemi
C. An
Talha E Anwar
Tewodros Arega
Netanell Avisdris
D. Aydogan
Yi-Shi Bai
Maria Baldeon Calisto
Berke Doga Basaran
Marcel Beetz
Cheng Bian
Hao-xuan Bian
Kevin Blansit
Louise Bloch
Robert Bohnsack
Sara Bosticardo
J. Breen
Mikael Brudfors
Raphael Brungel
Mariano Cabezas
Alberto Cacciola
Zhiwei Chen
Yucong Chen
Dan Chen
Minjeong Cho
Min-Kook Choi
Chuantao Xie
Dana Cobzas
Jorge Corral Acero
Sujit Kumar Das
Marcela de Oliveira
Hanqiu Deng
Guiming Dong
Lars Doorenbos
Cory Efird
Di Fan
Mehdi Fatan Serj
Alexandre Fenneteau
Lucas Fidon
Patryk Filipiak
René Finzel
Nuno Renato Freitas
C. Friedrich
Mitchell J. Fulton
Finn Gaida
Francesco Galati
Christoforos Galazis
Changna Gan
Zheyao Gao
Sheng Gao
Matej Gazda
Beerend G. A. Gerats
Neil Getty
Adam Gibicar
Ryan J. Gifford
Sajan Gohil
Maria Grammatikopoulou
Daniel Grzech
Orhun Guley
Timo Gunnemann
Chun-Hai Guo
Sylvain Guy
Heonjin Ha
Luyi Han
Ilseok Han
Ali Hatamizadeh
Tianhai He
Ji-Wu Heo
Sebastian Hitziger
SeulGi Hong
Seungbum Hong
Rian Huang
Zi-You Huang
Markus Huellebrand
Stephan Huschauer
M. Hussain
Tomoo Inubushi
Ece Isik Polat
Mojtaba Jafaritadi
Seonghun Jeong
Bailiang Jian
Yu Jiang
Zhifan Jiang
Yu Jin
Smriti Joshi
A. Kadkhodamohammadi
R. A. Kamraoui
Inhak Kang
Jun-Su Kang
Davood Karimi
April Ellahe Khademi
Muhammad Irfan Khan
Suleiman A. Khan
Rishab Khantwal
Kwang-Ju Kim
Timothy Lee Kline
Satoshi Kondo
Elina Kontio
Adrian Krenzer
Artem Kroviakov
Hugo J. Kuijf
Satyadwyoom Kumar
Francesco La Rosa
Abhishek Lad
Doohee Lee
Minho Lee
Chiara Lena
Hao Li
Ling Li
Xingyu Li
F. Liao
Kuan-Ya Liao
Arlindo L. Oliveira
Chaonan Lin
Shanhai Lin
Akis Linardos
M. Linguraru
Han Liu
Tao Liu
Dian Liu
Yanling Liu
João Lourenço-Silva
Jing Lu
Jia Lu
Imanol Luengo
Christina Bach Lund
Huan Minh Luu
Yingqi Lv
Uzay Macar
Leon Maechler
L. Sina Mansour
Kenji Marshall
Moona Mazher
Richard McKinley
Alfonso Medela
Felix Meissen
Mingyuan Meng
Dylan Bradley Miller
S. Mirjahanmardi
Arnab Kumar Mishra
Samir Mitha
Hassan Mohy-ud-Din
Tony C. W. Mok
Gowtham Krishnan Murugesan
Enamundram Naga Karthik
Sahil Nalawade
Jakub Nalepa
M. Naser
Ramin Nateghi
Hammad Naveed
Quang-Minh Nguyen
Cuong Nguyen Quoc
Brennan Nichyporuk
Bruno Oliveira
David Owen
Jimut Bahan Pal
Junwen Pan
W. Pan
Winnie Pang
Bogyu Park
Vivek G. Pawar
K. Pawar
Michael Peven
Lena Philipp
Tomasz Pieciak
Szymon S Płotka
Marcel Plutat
Fattane Pourakpour
Domen Prelovznik
K. Punithakumar
Abdul Qayyum
Sandro Queirós
Arman Rahmim
Salar Razavi
Jintao Ren
Mina Rezaei
Jonathan Adam Rico
ZunHyan Rieu
Markus Rink
Johannes Roth
Yusely Ruiz-Gonzalez
Numan Saeed
Anindo Saha
Mostafa M. Sami Salem
Ricardo Sanchez-Matilla
Kurt G Schilling
Weizhen Shao
Zhiqiang Shen
Ruize Shi
Pengcheng Shi
Daniel Sobotka
Théodore Soulier
Bella Specktor Fadida
D. Stoyanov
Timothy Sum Hon Mun
Xiao-Fu Sun
Rong Tao
Franz Thaler
Antoine Théberge
Felix Thielke
Helena R. Torres
K. Wahid
Jiacheng Wang
Yifei Wang
W. Wang
Xiong Jun Wang
Jianhui Wen
Ning Wen
Marek Wodziński
Yehong Wu
Fangfang Xia
Tianqi Xiang
Cheng Xiaofei
Lizhang Xu
Tingting Xue
Yu‐Xia Yang
Lingxian Yang
Kai Yao
Huifeng Yao
Amirsaeed Yazdani
Michael Yip
Hwa-Seong Yoo
Fereshteh Yousefirizi
Shu-Fen Yu
Lei Yu
Jonathan Zamora
Ramy Ashraf Zeineldin
Dewen Zeng
Jianpeng Zhang
Bokai Zhang
Jiapeng Zhang
Fangxi Zhang
Huahong Zhang
Zhongchen Zhao
Zixuan Zhao
Jia Zhao
Can Zhao
Q. Zheng
Yuheng Zhi
Ziqi Zhou
Baosheng Zou
Klaus Maier-Hein
Paul F. Jäger
Annette Kopp-Schneider
Lena Maier-Hein
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
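For reference, the k-fold cross-validation practice mentioned in the survey amounts to partitioning the training set into folds and rotating the held-out fold, as in the generic sketch below on synthetic data; it is not tied to any particular challenge or model.

```python
# Generic 5-fold cross-validation on synthetic data, illustrating the
# practice referenced in the survey results above.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
y = (X[:, 0] > 0).astype(int)

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[val_idx], y[val_idx]))
print(f"mean validation accuracy: {np.mean(scores):.2f}")
```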