Publications

Single-cell multi-omic topic embedding reveals cell-type-specific and COVID-19 severity-related immune signatures
Manqi Zhou
Hao Zhang
Zilong Bai
Dylan Mann-Krzisnik
Fei Wang
The advent of single-cell multi-omics sequencing technology makes it possible for researchers to leverage multiple modalities for individual cells and explore cell heterogeneity. However, the high-dimensional, discrete, and sparse nature of the data makes the downstream analysis particularly challenging. Most of the existing computational methods for single-cell data analysis are either limited to a single modality or lack flexibility and interpretability. In this study, we propose an interpretable deep learning method called the multi-omic embedded topic model (moETM) to effectively perform integrative analysis of high-dimensional single-cell multimodal data. moETM integrates multiple omics data via a product-of-experts in the encoder for efficient variational inference and then employs multiple linear decoders to learn the multi-omic signatures of the gene regulatory programs. Through comprehensive experiments on public single-cell transcriptome and chromatin accessibility data (i.e., scRNA+scATAC), as well as scRNA and proteomic data (i.e., CITE-seq), moETM demonstrates superior performance compared with six state-of-the-art single-cell data analysis methods on seven publicly available datasets. By applying moETM to the scRNA+scATAC data in human bone marrow mononuclear cells (BMMCs), we identified sequence motifs corresponding to the transcription factors that regulate immune gene signatures. Applying moETM to CITE-seq data from COVID-19 patients revealed not only known immune cell-type-specific signatures but also composite multi-omic biomarkers of critical conditions due to COVID-19, thus providing insights from both biological and clinical perspectives.
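The abstract above mentions combining omics-specific encoders via a product-of-experts for variational inference. As a minimal, hypothetical sketch (the standard Gaussian product-of-experts formula, not moETM's actual implementation), each modality contributes a Gaussian posterior and the joint posterior is their precision-weighted product:

```python
# Hypothetical sketch: product-of-experts over per-modality Gaussian
# posteriors. Function names and numbers are illustrative only.

def product_of_experts(mus, variances):
    """Combine Gaussian experts (mu_i, var_i).

    Precision T_i = 1/var_i; joint variance = 1/sum(T_i);
    joint mean = joint variance * sum(T_i * mu_i).
    """
    precisions = [1.0 / v for v in variances]
    joint_var = 1.0 / sum(precisions)
    joint_mu = joint_var * sum(t * m for t, m in zip(precisions, mus))
    return joint_mu, joint_var

# Two equally confident modalities with different latent means:
mu, var = product_of_experts([0.0, 2.0], [1.0, 1.0])
print(mu, var)  # 1.0 0.5
```

The precision weighting means a modality with a tighter (lower-variance) posterior pulls the joint estimate more strongly toward its own mean.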
Technical Note—Risk-Averse Regret Minimization in Multistage Stochastic Programs
Mehran Poursoltani
Angelos Georghiou
Leveraging the Third Dimension in Contrastive Learning
Sumukh K Aithal
Anirudh Goyal
Alex Lamb
Michael Curtis Mozer
Self-Supervised Learning (SSL) methods operate on unlabeled data to learn robust representations useful for downstream tasks. Most SSL methods rely on augmentations obtained by transforming the 2D image pixel map. These augmentations ignore the fact that biological vision takes place in an immersive three-dimensional, temporally contiguous environment, and that low-level biological vision relies heavily on depth cues. Using a signal provided by a pretrained state-of-the-art monocular RGB-to-depth model (the Depth Prediction Transformer, Ranftl et al., 2021), we explore two distinct approaches to incorporating depth signals into the SSL framework. First, we evaluate contrastive learning using an RGB+depth input representation. Second, we use the depth signal to generate novel views from slightly different camera positions, thereby producing a 3D augmentation for contrastive learning. We evaluate these two approaches on three different SSL methods -- BYOL, SimSiam, and SwAV -- using the ImageNette (a 10-class subset of ImageNet), ImageNet-100, and ImageNet-1k datasets. We find that both approaches to incorporating depth signals improve the robustness and generalization of the baseline SSL methods, though the first approach (with depth-channel concatenation) is superior. For instance, BYOL with the additional depth channel increases downstream classification accuracy from 85.3% to 88.0% on ImageNette and from 84.1% to 87.0% on ImageNet-C.
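The first approach in this abstract concatenates a predicted depth map to the RGB image as a fourth input channel. A toy, shapes-only sketch of that representation (the depth values here are placeholders, not DPT outputs, and the function name is illustrative):

```python
# Hypothetical sketch of the RGB+depth input representation: append the
# per-pixel depth value as a fourth channel before the contrastive pipeline.

def concat_depth_channel(rgb, depth):
    """rgb: H x W x 3 nested lists; depth: H x W. Returns H x W x 4."""
    return [
        [pixel + [d] for pixel, d in zip(rgb_row, depth_row)]
        for rgb_row, depth_row in zip(rgb, depth)
    ]

rgb = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]]  # a 1 x 2 RGB "image"
depth = [[0.9, 0.8]]                         # matching 1 x 2 depth map
rgbd = concat_depth_channel(rgb, depth)
print(rgbd)  # [[[0.1, 0.2, 0.3, 0.9], [0.4, 0.5, 0.6, 0.8]]]
```

In a real pipeline the same idea is a channel-wise concatenation of tensors, with the encoder's first convolution widened to accept 4 input channels.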
Accompanying patients in clinical oncology teams: Reported activities and perceived effects
Marie-Pascale Pomey
Jesseca Paquette
Monica Iliescu‐Nelea
Cécile Vialaron
Rim Mourad
Karine Bouchard
Louise Normandin
Marie‐Andrée Côté
Mado Desforges
Pénélope Pomey‐Carpentier
Israël Fortin
Isabelle Ganache
Zeev Rosberger
Danielle Charpentier
Lynda Bélanger
Michel Dorval
Djahanchah Philip Ghadiri
Mélanie Lavoie-Tremblay
Antoine Boivin
Jean-François Pelletier
Nicolas Fernandez
Alain M. Danino
Michèle de Guise
Since 2018, four establishments in Quebec, Canada, have decided to implement the PAROLE‐Onco programme, which introduced accompanying patients (APs) in healthcare teams to improve the experience of cancer patients. APs are patient advisors who have had a cancer treatment experience and who conduct consultations to complement the service offered by providing emotional, informational and educational support to patients undergoing treatments (e.g., radiotherapy, chemotherapy, surgery), mostly for breast cancer. We aimed to explore the evolution of APs' perspectives regarding their activities within the clinical oncology teams, as well as the perceived effects of their intervention with patients, the clinical team and themselves.
Semi-Supervised Object Detection for Agriculture
Gabriel Tseng
Krisztina Sinkovics
Tom Watsham
Thomas C. Walters
Enhancing Medical Image Segmentation with TransCeption: A Multi-Scale Feature Fusion Approach
Reza Azad
Yiwei Jia
Ehsan Khodapanah Aghdam
Dorit Merhof
While CNN-based methods have been the cornerstone of medical image segmentation due to their promising performance and robustness, they suffer from limitations in capturing long-range dependencies. Transformer-based approaches are currently prevalent since they enlarge the receptive field to model global contextual correlations. To further extract rich representations, some extensions of the U-Net employ multi-scale feature extraction and fusion modules and obtain improved performance. Inspired by this idea, we propose TransCeption for medical image segmentation, a pure transformer-based U-shaped network featuring an inception-like module in the encoder and a contextual bridge for better feature fusion. The design proposed in this work is based on three core principles: (1) The patch merging module in the encoder is redesigned with ResInception Patch Merging (RIPM). The multi-branch transformer (MB transformer) adopts the same number of branches as the outputs of RIPM. Combining the two modules enables the model to capture a multi-scale representation within a single stage. (2) We construct an Intra-stage Feature Fusion (IFF) module following the MB transformer to enhance the aggregation of feature maps from all the branches, focusing particularly on the interaction between the different channels of all the scales. (3) In contrast to a bridge that only contains token-wise self-attention, we propose a Dual Transformer Bridge that also includes channel-wise self-attention to exploit correlations between scales at different stages from a dual perspective. Extensive experiments on multi-organ and skin lesion segmentation tasks demonstrate the superior performance of TransCeption compared to previous work. The code is publicly available at https://github.com/mindflow-institue/TransCeption.
Explainable Machine Learning Model to Predict COVID-19 Severity Among Older Adults in the Province of Quebec.
Charlene H Chu
Roland M. Grad
Mark Karanofsky
Mylene Arsenault
Charlene Esteban Ronquillo
Isabelle Vedel
K. McGilton
Machelle Wilchesky
Context: Patients over the age of 65 years are more likely to experience higher severity and mortality rates from COVID-19 than other populations. Clinicians need assistance in supporting their decisions regarding the management of these patients. Artificial Intelligence (AI) can help in this regard. However, the lack of explainability of AI, defined as "the ability to understand and evaluate the internal mechanism of the algorithm/computational process in human terms," is one of the major challenges to its application in health care. We know little about the application of explainable AI (XAI) in health care. Objective: In this study, we aimed to evaluate the feasibility of developing explainable machine learning models to predict COVID-19 severity among older adults. Design: Quantitative machine learning methods. Setting: Long-term care facilities within the province of Quebec. Participants: Patients 65 years and older who presented to the hospitals and had a positive polymerase chain reaction test for COVID-19. Intervention: We used XAI-specific methods (e.g., EBM), machine learning methods (i.e., random forest, deep forest, and XGBoost), as well as explainable approaches such as LIME, SHAP, PIMP, and anchor combined with the aforementioned machine learning methods. Outcome measures: Classification accuracy and area under the receiver operating characteristic curve (AUC). Results: The mean age of the patients (n = 986, 54.6% male) was 84.5 ± 19.5 years. The best-performing models (and their performance) were as follows: deep forest with the model-agnostic XAI methods LIME (97.36% AUC, 91.65% ACC), Anchor (97.36% AUC, 91.65% ACC), and PIMP (96.93% AUC, 91.65% ACC). The reasoning identified in our models' predictions aligned with clinical studies' findings about the correlation of variables such as diabetes and dementia with the severity of COVID-19 in this population.
Conclusions: The use of explainable machine learning models to predict the severity of COVID-19 among older adults is feasible. We obtained both a high level of performance and explainability in predicting COVID-19 severity in this population. Further studies are required to integrate these models into a decision support system to facilitate the management of diseases such as COVID-19 for (primary) health care providers and to evaluate their usability among them.
Regeneration Learning: A Learning Paradigm for Data Generation
Xu Tan
Tao Qin
Jiang Bian
Tie-Yan Liu
Robustness and Sample Complexity of Model-Based MARL for General-Sum Markov Games
Jayakumar Subramanian
Amit Sinha
Disentangling poststroke cognitive deficits and their neuroanatomical correlates through combined multivariable and multioutcome lesion‐symptom mapping
Nick A. Weaver
Muhammad Hasnain Mamdani
Jae‐Sung Lim
J. Matthijs Biesbroek
Geert Jan Biessels
Irene M. C. Huenges Wajer
Yeonwook Kang
Beom Joon Kim
Byung‐Chul Lee
Keon‐Joo Lee
Kyung‐Ho Yu
Hee-Joon Bae
Hugo J. Kuijf
lo-fi: distributed fine-tuning without communication
Mitchell Wortsman
Suchin Gururangan
Shen Li
Ali Farhadi
Ludwig Schmidt
Ari S. Morcos
When fine-tuning large neural networks, it is common to use multiple nodes and to communicate gradients at each optimization step. By contrast, we investigate completely local fine-tuning, which we refer to as lo-fi. During lo-fi, each node fine-tunes independently without any communication. Then, the weights are averaged across nodes at the conclusion of fine-tuning. When fine-tuning DeiT-base and DeiT-large on ImageNet, this procedure matches accuracy in-distribution and improves accuracy under distribution shift compared to the baseline, which observes the same amount of data but communicates gradients at each step. We also observe that lo-fi matches the baseline's performance when fine-tuning OPT language models (up to 1.3B parameters) on Common Crawl. By removing the communication requirement, lo-fi reduces resource barriers for fine-tuning large models and enables fine-tuning in settings with prohibitive communication cost.
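The averaging step this abstract describes is a single element-wise mean over the fine-tuned weights of each node. A toy, hypothetical sketch (plain dicts of lists stand in for model state dicts; names like average_state_dicts are illustrative, not from the paper's code):

```python
# Hypothetical sketch of the lo-fi merge: each node fine-tunes its own
# copy with no communication, then weights are averaged once at the end.

def average_state_dicts(state_dicts):
    """Element-wise average of a list of model "state dicts"
    (here: dicts mapping parameter names to flat lists of floats)."""
    keys = state_dicts[0].keys()
    return {
        k: [sum(vals) / len(state_dicts)
            for vals in zip(*(sd[k] for sd in state_dicts))]
        for k in keys
    }

# Toy example: two nodes finished independent fine-tuning runs.
node_a = {"layer.weight": [1.0, 3.0], "layer.bias": [0.0]}
node_b = {"layer.weight": [3.0, 5.0], "layer.bias": [2.0]}
merged = average_state_dicts([node_a, node_b])
print(merged)  # {'layer.weight': [2.0, 4.0], 'layer.bias': [1.0]}
```

With real frameworks the same operation is an average over tensors in each node's state dict, applied once after training rather than at every step.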
A Framework for Obtaining Accurate Posteriors of Strong Gravitational Lensing Parameters with Flexible Priors and Implicit Likelihoods Using Density Estimation
Ronan Legin
Benjamin Wandelt
We report the application of implicit likelihood inference to the prediction of the macroparameters of strong lensing systems with neural networks. This allows us to perform deep-learning analysis of lensing systems within a well-defined Bayesian statistical framework to explicitly impose desired priors on lensing variables, obtain accurate posteriors, and guarantee convergence to the optimal posterior in the limit of perfect performance. We train neural networks to perform a regression task to produce point estimates of lensing parameters. We then interpret these estimates as compressed statistics in our inference setup and model their likelihood function using mixture density networks. We compare our results with those of approximate Bayesian neural networks, discuss their significance, and point to future directions. Based on a test set of 100,000 strong lensing simulations, our amortized model produces accurate posteriors for any arbitrary confidence interval, with a maximum percentage deviation of 1.4% at the 21.8% confidence level, without the need for any added calibration procedure. In total, inferring 100,000 different posteriors takes a day on a single GPU, showing that the method scales well to the thousands of lenses expected to be discovered by upcoming sky surveys.
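A mixture density network, as used in this abstract to model the likelihood of the compressed statistics, outputs the parameters of a mixture of Gaussians. Evaluating that density is the core operation; the sketch below does so in 1-D with made-up weights, means, and widths (illustrative only, not the paper's model):

```python
# Hedged sketch: evaluate a 1-D Gaussian mixture density of the kind an
# MDN predicts. Parameters below are placeholders for illustration.
import math

def gaussian_mixture_pdf(x, weights, means, sigmas):
    """p(x) = sum_i w_i * N(x; mu_i, sigma_i^2), with sum_i w_i = 1."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, means, sigmas)
    )

# Equal-weight mixture of N(0, 1) and N(1, 1), evaluated at x = 0:
p = gaussian_mixture_pdf(0.0, [0.5, 0.5], [0.0, 1.0], [1.0, 1.0])
```

In the inference setup described above, densities like this (with network-predicted parameters) serve as the likelihood that is combined with explicit priors to form the posterior over lensing parameters.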