Lifelong Topological Visual Navigation
Rey Reza Wiyatno
Anqi Xu
Commonly, learning-based topological navigation approaches produce a local policy while preserving some loose connectivity of the space through a topological map. Nevertheless, spurious or missing edges in the topological graph often lead to navigation failure. In this work, we propose a sampling-based graph building method, which yields sparser graphs with higher navigation performance than baseline methods. We also propose graph maintenance strategies that eliminate spurious edges and expand the graph as needed, which improves lifelong navigation performance. Unlike controllers that learn from fixed training environments, we show that our model can be fine-tuned using only a small number of collected trajectory images from a real-world environment where the agent is deployed. We demonstrate successful navigation after fine-tuning on real-world environments, and notably show significant navigation improvements over time by applying our lifelong graph maintenance strategies.
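To make the graph-maintenance idea in this abstract concrete, here is a minimal illustrative sketch (not the paper's actual algorithm); the failure threshold FAIL_LIMIT, the confidence threshold tau, and the helper names are hypothetical choices for illustration only.

```python
# Illustrative sketch (not the paper's implementation): maintaining a
# topological graph by pruning edges that repeatedly cause navigation
# failures and adding edges discovered during deployment.
import networkx as nx

FAIL_LIMIT = 3  # hypothetical threshold for declaring an edge spurious


def update_graph(graph: nx.DiGraph, edge, succeeded: bool) -> None:
    """Record the outcome of traversing `edge`; prune it if it keeps failing."""
    u, v = edge
    if succeeded:
        graph[u][v]["failures"] = 0
        return
    graph[u][v]["failures"] = graph[u][v].get("failures", 0) + 1
    if graph[u][v]["failures"] >= FAIL_LIMIT:
        graph.remove_edge(u, v)  # spurious edge: repeated navigation failures


def add_discovered_edge(graph: nx.DiGraph, u, v, confidence: float, tau: float = 0.9) -> None:
    """Expand the graph with a newly verified connection between two nodes."""
    if confidence >= tau and not graph.has_edge(u, v):
        graph.add_edge(u, v, failures=0)


# Toy usage: three observation nodes, one spurious edge, one new shortcut.
g = nx.DiGraph()
g.add_edges_from([("a", "b"), ("b", "c"), ("a", "c")], failures=0)
for _ in range(FAIL_LIMIT):
    update_graph(g, ("a", "c"), succeeded=False)
add_discovered_edge(g, "c", "a", confidence=0.95)
print(sorted(g.edges()))  # ('a', 'c') pruned, ('c', 'a') added
```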
MixEHR-Guided: A guided multi-modal topic modeling approach for large-scale automatic phenotyping using the electronic health record
Yuri Ahuja
Yuesong Zou
Aman Verma
Predicting histopathology markers of endometrial carcinoma with a quantitative image analysis approach based on spherical harmonics in multiparametric MRI.
Thierry L. Lefebvre
Ozan Ciga
Sahir Bhatnagar
Yoshiko Ueno
S. Saif
Eric Winter-Reinhold
Anthony Dohan
P. Soyer
Reza Forghani
Jan Seuntjens
Caroline Reinhold
Peter Savadjiev
Social isolation and the brain in the pandemic era
Robin I. M. Dunbar
Synthetic data as an enabler for machine learning applications in medicine
Jean-Francois Rajotte
Robert Bergen
Khaled El Emam
Raymond Ng
Elissa Strome
The use of artificial intelligence and virtual reality in doctor-patient risk communication: A scoping review.
Ryan Antel
Elena Guadagno
Jason M. Harley
Evolution of cell size control is canalized towards adders or sizers by cell cycle structure and selective pressures
Felix Proulx-Giraldeau
Jan M Skotheim
Cell size is controlled to be within a specific range to support physiological function. To control their size, cells use diverse mechanisms ranging from ‘sizers’, in which differences in cell size are compensated for in a single cell division cycle, to ‘adders’, in which a constant amount of cell growth occurs in each cell cycle. This diversity raises the question of why a particular cell would implement one mechanism rather than another. To address this question, we performed a series of simulations evolving cell size control networks. The size control mechanism that evolved was influenced by both cell cycle structure and specific selection pressures. Moreover, evolved networks recapitulated known size control properties of naturally occurring networks. If the mechanism is based on a G1 size control and an S/G2/M timer, as found for budding yeast and some human cells, adders likely evolve. But, if the G1 phase is significantly longer than the S/G2/M phase, as is often the case in mammalian cells in vivo, sizers become more likely. Sizers also evolve when the cell cycle structure is inverted so that G1 is a timer, while S/G2/M performs size control, as is the case for the fission yeast S. pombe. For some size control networks, cell size consistently decreases in each cycle until a burst of cell cycle inhibitor drives an extended G1 phase, much like the cell division cycle of the green algae Chlamydomonas. That these size control networks evolved such self-organized criticality shows how the evolution of complex systems can drive the emergence of critical processes.
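A toy illustration of the two size-control rules as the abstract defines them (my own simplification, not the paper's evolutionary simulations); the increment delta, target size, noise level, and symmetric division are assumed parameters.

```python
# Toy sketch of the 'adder' vs 'sizer' rules defined in the abstract:
# an adder adds a constant amount of growth each cycle, while a sizer
# divides at a fixed target size regardless of birth size.
import numpy as np

rng = np.random.default_rng(1)


def adder(birth_size, delta=1.0, noise=0.05):
    """Divide after adding a fixed increment (plus noise) to the birth size."""
    return birth_size + delta + rng.normal(0, noise)


def sizer(birth_size, target=2.0, noise=0.05):
    """Divide once a fixed target size is reached, independent of birth size."""
    return target + rng.normal(0, noise)


def simulate(rule, birth_size=0.5, generations=10):
    sizes = [birth_size]
    for _ in range(generations):
        division_size = rule(sizes[-1])
        sizes.append(division_size / 2.0)  # symmetric division
    return sizes


print("adder:", np.round(simulate(adder), 3))  # size errors damped over several cycles
print("sizer:", np.round(simulate(sizer), 3))  # size errors corrected in a single cycle
```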
SPeCiaL: Self-Supervised Pretraining for Continual Learning
Lucas Caccia
From analytic to synthetic-organizational pluralisms: A pluralistic enactive psychiatry
Christophe Gauld
Kristopher Nielsen
Manon Job
Hugo Bottemanne
Recommendations and guidelines from the ISMRM Diffusion Study Group for preclinical diffusion MRI: Part 2 -- Ex vivo imaging
Kurt G Schilling
Francesco Grussu
Andrada Ianus
Brian Hansen
Manisha Aggarwal
Stijn Michielse
Fatima Nasrallah
Warda Syeda
Nian Wang
Jelle Veraart
Alard Roebroeck
Andrew F. Bagdasarian
Cornelius Eichner
Farshid Sepehrband
Jan Zimmermann
Ben Jeurissen
Lucio Frydman
Yohan van de Looij
David Hike
Jeff F. Dunn
Karla Miller
Bennett Landman
Noam Shemesh
Arthur Anderson
Emilie McKinnon
Shawna Farquharson
Flavio Dell’Acqua
Carlo Pierpaoli
Ivana Drobnjak
Alexander Leemans
Kevin D. Harkins
Maxime Descoteaux
Duan Xu
Mathieu D. Santin
Samuel C. Grant
Andre Obenaus
Gene S. Kim
Dan Wu
Denis Le Bihan
Stephen J. Blackband
Luisa Ciobanu
Els Fieremans
Ruiliang Bai
Trygve B. Leergaard
Jiangyang Zhang
Tim B. Dyrby
G. Allan Johnson
Matthew D. Budde
Ileana O. Jelescu
Estimating individual treatment effect on disability progression in multiple sclerosis using deep learning
Jean-Pierre R. Falet
Joshua D. Durso-Finley
Brennan Nichyporuk
Julien Schroeter
Francesca Bovis
Maria-Pia Sormani
Douglas Arnold
FedShuffle: Recipes for Better Use of Local Work in Federated Learning
Samuel Horváth
Maziar Sanjabi
Lin Xiao
Peter Richtárik
The practice of applying several local updates before aggregation across clients has been empirically shown to be a successful approach to overcoming the communication bottleneck in Federated Learning (FL). Such methods are usually implemented by having clients perform one or more epochs of local training per round while randomly reshuffling their finite dataset in each epoch. Data imbalance, where clients have different numbers of local training samples, is ubiquitous in FL applications, resulting in different clients performing different numbers of local updates in each round. In this work, we propose a general recipe, FedShuffle, that better utilizes the local updates in FL, especially in this regime of random reshuffling and heterogeneity. FedShuffle is the first local update method with theoretical convergence guarantees that incorporates random reshuffling, data imbalance, and client sampling — features that are essential in large-scale cross-device FL. We present a comprehensive theoretical analysis of FedShuffle and show, both theoretically and empirically, that it does not suffer from the objective function mismatch that is present in FL methods that assume homogeneous updates in heterogeneous FL setups, such as FedAvg (McMahan et al., 2017). In addition, by combining the ingredients above, FedShuffle improves upon FedNova (Wang et al., 2020), which was previously proposed to solve this mismatch. Similar to Mime (Karimireddy et al., 2020), we show that FedShuffle with momentum variance reduction (Cutkosky & Orabona, 2019) improves upon non-local methods under a Hessian similarity assumption.
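A minimal sketch of the setting the abstract describes (per-epoch random reshuffling of local data and aggregation under data imbalance); this is an assumed simplification on a least-squares toy problem, not the authors' FedShuffle algorithm or released code, and the learning rate, weighting, and function names are illustrative.

```python
# Sketch: one federated round with random-reshuffling local epochs and
# aggregation that weights each client by its share of the total data.
import numpy as np

rng = np.random.default_rng(0)


def local_epoch(w, data, lr=0.1):
    """One epoch of SGD over a randomly reshuffled local dataset (least squares)."""
    order = rng.permutation(len(data))
    for i in order:
        x, y = data[i]
        grad = 2 * (w @ x - y) * x  # gradient of (w·x - y)^2
        w = w - lr * grad
    return w


def aggregate(w_global, client_updates, client_sizes):
    """Combine client models, weighting each update by its number of samples."""
    total = sum(client_sizes)
    delta = sum((n / total) * (w_k - w_global)
                for w_k, n in zip(client_updates, client_sizes))
    return w_global + delta


# Toy round with two clients of unequal size (data imbalance).
d = 3
w = np.zeros(d)
clients = [
    [(rng.normal(size=d), rng.normal()) for _ in range(20)],
    [(rng.normal(size=d), rng.normal()) for _ in range(5)],
]
updates = [local_epoch(w.copy(), c) for c in clients]
w = aggregate(w, updates, [len(c) for c in clients])
print(w)
```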