Publications

Mapping parallelism in a functional IR through constraint satisfaction: a case study on convolution for mobile GPUs
Naums Mogers
Lu Li
Valentin Radu
Graphics Processing Units (GPUs) are notoriously hard to optimize for manually. What is needed are good automatic code generators and optimizers. Accelerate, Futhark and Lift demonstrated that a functional approach is well suited for this challenge. Lift, for instance, uses a system of rewrite rules with a multi-stage approach. Algorithmic optimizations are explored first, followed by hardware-specific optimizations such as using shared memory and mapping parallelism. While the algorithmic exploration leads to correct transformed programs by construction, the same is not necessarily true for the latter phase. Exploiting shared memory and mapping parallelism while ensuring correct synchronization is a delicate balancing act, and is hard to encode in a rewrite system. Currently, Lift relies on heuristics with ad-hoc mechanisms to check for correctness. Although this practical approach eventually produces high-performance code, it is not an ideal state of affairs. This paper proposes to extract parallelization constraints automatically from a functional IR and use a solver to identify valid rewritings. Using a convolutional neural network on a mobile GPU as a use case, this approach matches the performance of the ARM Compute Library GEMM convolution and the TVM-generated kernel while consuming between 2.7x and 3.6x less memory on average. Furthermore, a speedup of 12x is achieved over the ARM Compute Library direct convolution implementation.
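As a rough illustration of the constraint-solving idea described in this abstract (not the Lift system itself), the sketch below enumerates assignments of nested maps to hypothetical GPU parallelism levels and keeps only those satisfying two made-up validity constraints; the map names, levels, and constraints are all illustrative assumptions.

```python
# Hypothetical toy example (not the Lift IR): each nested map is a variable whose
# value is the parallelism level it is mapped to; we search for assignments that
# satisfy simple validity constraints, standing in for solver-extracted ones.
from itertools import product

MAPS = ["map_outer", "map_middle", "map_inner"]        # nesting order, outermost first
LEVELS = ["workgroup", "local_thread", "sequential"]   # assumed GPU mapping levels

def is_valid(assignment):
    """Toy constraints:
    1. Each parallel level is used by at most one map.
    2. A sequential map may not enclose a parallel one."""
    parallel = {"workgroup", "local_thread"}
    used = [lvl for lvl in assignment if lvl in parallel]
    if len(used) != len(set(used)):
        return False
    seen_sequential = False
    for lvl in assignment:  # outermost to innermost
        if lvl == "sequential":
            seen_sequential = True
        elif seen_sequential:
            return False
    return True

valid_mappings = [dict(zip(MAPS, combo))
                  for combo in product(LEVELS, repeat=len(MAPS))
                  if is_valid(combo)]

for mapping in valid_mappings:
    print(mapping)
```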
WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series Tasks
Jean-Christophe Gagnon-Audet
Kartik Ahuja
Mohammad-Javad Darvishi-Bayazi
Cross-ethnicity/race generalization failure of behavioral prediction from resting-state functional connectivity
Jingwei Li
Jianzhong Chen
Angela Tam
Leon Qi Rong Ooi
Avram J. Holmes
Tian Ge
Kaustubh R. Patil
Mbemba Jabbi
Simon B. Eickhoff
B.T. Thomas Yeo
Sarah Genon
Algorithmic biases that favor majority populations pose a key challenge to the application of machine learning for precision medicine. Here, we assessed such bias in prediction models of behavioral phenotypes from brain functional magnetic resonance imaging. We examined the prediction bias using two independent datasets (preadolescent versus adult) of mixed ethnic/racial composition. When predictive models were trained on data dominated by white Americans (WA), out-of-sample prediction errors were generally higher for African Americans (AA) than for WA. This bias toward WA corresponds to more WA-like brain-behavior association patterns learned by the models. When models were trained on AA only, compared to training only on WA or an equal number of AA and WA participants, AA prediction accuracy improved but stayed below that for WA. Overall, the results point to the need for caution and further research regarding the application of current brain-behavior prediction models in minority populations.
A connectomics-based taxonomy of mammals
Laura E. Suárez
Yossi Yovel
Martijn P. van den Heuvel
Olaf Sporns
Yaniv Assaf
Bratislav Mišić
Mammalian taxonomies are conventionally defined by morphological traits and genetics. How species differ in terms of neural circuits and whether inter-species differences in neural circuit organization conform to these taxonomies is unknown. The main obstacle to comparing neural architectures has been differences in network reconstruction techniques, yielding species-specific connectomes that are not directly comparable to one another. Here we comprehensively chart connectome organization across the mammalian phylogenetic spectrum using a common reconstruction protocol. We analyze the mammalian MRI (MaMI) data set, a database that encompasses high-resolution ex vivo structural and diffusion magnetic resonance imaging (MRI) scans of 124 species across 12 taxonomic orders and 5 superorders, collected using a single protocol on a single scanner. We assess similarity between species connectomes using two methods: similarity of Laplacian eigenspectra and similarity of multiscale topological features. We find greater inter-species similarity among species within the same taxonomic order, suggesting that connectome organization recapitulates traditional taxonomies defined by morphology and genetics. While all connectomes retain hallmark global features and relative proportions of connection classes, inter-species variation is driven by local regional connectivity profiles. By encoding connectomes into a common frame of reference, these findings establish a foundation for investigating how neural circuits change over phylogeny, forging a link from genes to circuits to behaviour.
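As a minimal sketch of one of the two similarity measures mentioned in the abstract, the following NumPy snippet compares the Laplacian eigenspectra of two connectomes; the random adjacency matrices are stand-ins, not MaMI data, and real pipelines typically interpolate or smooth spectra of graphs with unequal sizes.

```python
# Compare two connectomes by the eigenspectra of their graph Laplacians (L = D - A).
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the combinatorial graph Laplacian."""
    degree = np.diag(adj.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(degree - adj))

def spectral_distance(adj_a, adj_b):
    """Euclidean distance between sorted eigenspectra (assumes equal node counts)."""
    return np.linalg.norm(laplacian_spectrum(adj_a) - laplacian_spectrum(adj_b))

rng = np.random.default_rng(0)

def random_connectome(n=100, density=0.1):
    """Symmetric binary adjacency matrix with zero diagonal (placeholder data)."""
    a = np.triu((rng.random((n, n)) < density).astype(float), 1)
    return a + a.T

species_a, species_b = random_connectome(), random_connectome()
print(f"spectral distance: {spectral_distance(species_a, species_b):.3f}")
```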
Kubric: A scalable dataset generator
Klaus Greff
Francois Belletti
Lucas Beyer
Carl Doersch
Yilun Du
Daniel Duckworth
David J. Fleet
Dan Gnanapragasam
Florian Golemo
Charles Herrmann
Thomas N. Kipf
Abhijit Kundu
Dmitry Lagun
Issam Hadj Laradji
Hsueh-Ti Liu
H. Meyer
Yishu Miao
Cengiz Oztireli
Etienne Pot
Noha Radwan
Daniel Rebain
Sara Sabour
Mehdi S. M. Sajjadi
Matan Sela
Vincent Sitzmann
Austin Stone
Deqing Sun
Suhani Vora
Ziyu Wang
Tianhao Wu
Kwang Moo Yi
Fangcheng Zhong
Andrea Tagliasacchi
Data is the driving force of machine learning, with the amount and quality of training data often being more important for the performance of a system than architecture and training details. But collecting, processing and annotating real data at scale is difficult, expensive, and frequently raises additional privacy, fairness and legal concerns. Synthetic data is a powerful tool with the potential to address these shortcomings: 1) it is cheap, 2) it supports rich ground-truth annotations, 3) it offers full control over the data, and 4) it can circumvent or mitigate problems regarding bias, privacy and licensing. Unfortunately, software tools for effective data generation are less mature than those for architecture design and training, which leads to fragmented generation efforts. To address these problems we introduce Kubric, an open-source Python framework that interfaces with PyBullet and Blender to generate photo-realistic scenes with rich annotations, seamlessly scales to large jobs distributed over thousands of machines, and generates TBs of data. We demonstrate the effectiveness of Kubric by presenting a series of 13 different generated datasets for tasks ranging from studying 3D NeRF models to optical flow estimation. We release Kubric, the used assets, all of the generation code, as well as the rendered datasets for reuse and modification.
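The sketch below is loosely adapted from the public "hello world" example in the Kubric repository: build a scene, add objects, a light and a camera, then render a frame with per-pixel annotations. Class and parameter names are reproduced from memory and should be verified against the Kubric documentation.

```python
# Minimal Kubric-style scene: floor, sphere, light, camera, then render once.
import kubric as kb
from kubric.renderer.blender import Blender as KubricRenderer

scene = kb.Scene(resolution=(256, 256))
renderer = KubricRenderer(scene)

# Populate the scene with objects, a light source and a camera.
scene += kb.Cube(name="floor", scale=(10, 10, 0.1), position=(0, 0, -0.1))
scene += kb.Sphere(name="ball", scale=1, position=(0, 0, 1.0))
scene += kb.DirectionalLight(name="sun", position=(-1, -0.5, 3),
                             look_at=(0, 0, 0), intensity=1.5)
scene += kb.PerspectiveCamera(name="camera", position=(3, -1, 4),
                              look_at=(0, 0, 1))

# Render a single frame and save the RGBA image plus a segmentation annotation.
frame = renderer.render_still()
kb.write_png(frame["rgba"], "output/helloworld.png")
kb.write_palette_png(frame["segmentation"], "output/segmentation.png")
```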
Misinterpreting the horseshoe effect in neuroscience
Timothée Proix
Tomislav Milekovic
Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers
Miguel Saavedra-Ruiz
Sacha Morin
In this work, we consider the problem of learning a perception model for monocular robot navigation using few annotated images. Using a Vision Transformer (ViT) pretrained with a label-free self-supervised method, we successfully train a coarse image segmentation model for the Duckietown environment using 70 training images. Our model performs coarse image segmentation at the …
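A rough sketch of the recipe this abstract describes, assuming DINO-pretrained ViT weights from torch.hub (the abstract does not name the self-supervised method here) and a linear head over frozen patch tokens; the class count, labels and hyperparameters are placeholders, not the authors' setup.

```python
# Coarse per-patch segmentation on top of a frozen self-supervised ViT backbone.
import torch
import torch.nn as nn

# Frozen DINO ViT-S/16 backbone from the public repository (assumed choice).
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

NUM_CLASSES = 2  # e.g. drivable vs. not drivable (placeholder)
head = nn.Linear(384, NUM_CLASSES)  # 384 = ViT-S embedding dimension
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def patch_tokens(images):
    # get_intermediate_layers returns [B, 1 + num_patches, dim]; drop the CLS token.
    with torch.no_grad():
        feats = backbone.get_intermediate_layers(images, n=1)[0]
    return feats[:, 1:, :]

def train_step(images, patch_labels):
    """images: [B, 3, 224, 224]; patch_labels: [B, num_patches] coarse labels."""
    tokens = patch_tokens(images)                       # [B, P, 384]
    logits = head(tokens)                               # [B, P, NUM_CLASSES]
    loss = criterion(logits.reshape(-1, NUM_CLASSES), patch_labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```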
Feeding What You Need by Understanding What You Learned
Xiaoqiang Wang
Fangli Xu
Bowei Long
Siliang Tang
Lingfei Wu
A New Era: Intelligent Tutoring Systems Will Transform Online Learning for Millions
Francois St-Hilaire
Dung D. Vu
Antoine Frau
Nathan J. Burns
Farid Faraji
Joseph Potochny
Stephane Robert
Arnaud Roussel
Selene Zheng
Taylor Glazier
Junfel Vincent Romano
Robert Belfer
Muhammad Shayan
Ariella Smofsky
Tommy Delarosbil
Seulmin Ahn
Simon Eden-Walker
Kritika Sony
Ansona Onyi Ching
Sabina Elkins
A. Stepanyan
Adela Matajova
Victor Chen
Hossein Sahraei
Robert Larson
N. Markova
Andrew Barkett
Iulian V. Serban
Ekaterina Kochmar
Application of AI in community based primary health care: Systematic review and critical appraisal
Patrick Archambault
Hervé Tchala Vignon Zomahoun
Sam Chandavong
Marie-Pierre Gagnon
Sabrina M. Wong
Gauri Sharma
Lyse Langlois
Nathalie Rheault
Yves Couturier
Jean Légaré