Publications

Predictors of Loss of Infectivity Among Healthcare Workers with Primary and Recurrent SARS-CoV-2 Infection: An Observational Cohort Study
Stefka Dzieciolowska
Yves Longtin
Hugues Charest
Tonya Roy
Judith Fafard
Inès Levade
Jean Longtin
Leighanne Parkes
Jasmin Villeneuve
Patrice Savard
J. Corbeil
Gaston De Serres
Abstract Background Factors associated with loss of infectivity in healthcare workers (HCWs) with COVID-19 are poorly understood. Understanding predictive factors could help optimize return-to-work criteria and minimize absenteeism. Methods Prospective observational cohort study of HCWs with COVID-19 conducted between February 20, 2022 and March 6, 2023 in 20 institutions in Montreal, Canada, with clinical/laboratory follow-up on days 5, 7 and 10 of infection. Infectivity was determined by viral culture (Vero E6 cells) on nasopharyngeal swabs. Predictors of loss of infectivity were investigated by univariate and multivariate logistic regression. Results Overall, 121 participants (79.3% female, mean age 40 years) were recruited. Most (n=107, 88.4%) had received ≥3 vaccines and 20 (16.5%) had a history of prior COVID-19. The proportion of HCWs with a positive viral culture decreased from 71.9% on day 5 of infection to 18.2% on day 10. The proportion of HCWs with a positive RT-PCR decreased from 93.3% (112/120) on day 5 (median Ct value, 23.4 [IQR, 20.6-27.9]) to 61.2% (74/120) on day 10 (median Ct value, 32.5 [IQR, 28.5 to undetectable]). Rapid antigen detection test (RADT) positivity decreased from 81.5% on day 5 to 34.2% on day 10. Participants with recurrent COVID-19 had a lower likelihood of infectivity at each visit (OR on day 5, 0.14; 95% CI, 0.05-0.40; p<0.001; OR on day 7, 0.04; 95% CI, 0.01-0.33; p=0.003) and none were infective on day 10 (p=0.02). At each visit, recurrent cases had higher median RT-PCR Ct values than primary infections (p<0.03) and were more likely to have a negative RADT result (p<0.01). By multivariate analysis, ongoing infectivity was associated with an RT-PCR Ct value <23 (adjusted OR [aOR] on day 5, 22.75; p<0.001; aOR on day 7, 182.30; p<0.001; and aOR on day 10, 24.71; p=0.02). A history of previous COVID-19 was associated with a lower probability of infectivity on day 5 (aOR, 0.005; p=0.003).
By contrast, symptom improvement (including fever) and RADT result were not independent predictors of loss of infectivity. Conclusion A lower RT-PCR Ct value is associated with ongoing infectivity, whereas COVID-19 reinfection is a predictor of loss of infectivity. These findings could help optimize return-to-work algorithms. Disclosures All Authors: No reported disclosures
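The univariate odds ratios reported above (e.g., OR 0.14 for infectivity among recurrent cases on day 5) come from standard 2x2-table logistic analysis. As a minimal sketch of how such an odds ratio and its Wald 95% CI are computed, the snippet below uses purely illustrative counts, not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: rows = recurrent vs primary infection,
# columns = culture-positive vs culture-negative on day 5.
or_, lo, hi = odds_ratio_ci(5, 15, 82, 19)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The multivariate aORs in the abstract would additionally adjust for covariates via logistic regression, which this sketch omits.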
Author Correction: 30×30 biodiversity gains rely on national coordination
Isaac Eckert
Andrea Brown
Dominique Caron
Federico Riva
Exploring the multidimensional nature of repetitive and restricted behaviors and interests (RRBI) in autism: neuroanatomical correlates and clinical implications
Aline Lefebvre
Nicolas Traut
Amandine Pedoux
Anna Maruani
Anita Beggiato
Monique Elmaleh
David Germanaud
Anouck Amestoy
Myriam Ly‐Le Moal
Christopher H. Chatham
Lorraine Murtagh
Manuel Bouvard
Marianne Alisson
Marion Leboyer
Thomas Bourgeron
Roberto Toro
Clara A. Moreau
Richard Delorme
scGeneRythm: Using Neural Networks and Fourier Transformation to Cluster Genes by Time-Frequency Patterns in Single-Cell Data
Yiming Jia
Hao Wu
The search for the lost attractor
Mario Pasquato
Syphax Haddad
Pierfrancesco Di Cintio
Noé Dia
Mircea Petrache
Ugo Niccolo Di Carlo
Alessandro A. Trani
Hessian Aware Low-Rank Perturbation for Order-Robust Continual Learning
Jiaqi Li
Rui Wang
Yuanhao Lai
Charles X. Ling
Shichun Yang
Boyu Wang
Fan Zhou
Continual learning aims to learn a series of tasks sequentially without forgetting the knowledge acquired from the previous ones. In this work, we propose the Hessian Aware Low-Rank Perturbation algorithm for continual learning. By modeling the parameter transitions along the sequential tasks with the weight matrix transformation, we propose to apply the low-rank approximation on the task-adaptive parameters in each layer of the neural networks. Specifically, we theoretically demonstrate the quantitative relationship between the Hessian and the proposed low-rank approximation. The approximation ranks are then globally determined according to the marginal increment of the empirical loss estimated by the layer-specific gradient and low-rank approximation error. Furthermore, we control the model capacity by pruning less important parameters to diminish the parameter growth. We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against some recent state-of-the-art methods to demonstrate the effectiveness and scalability of our proposed method. Empirical results show that our method performs better on different benchmarks, especially in achieving task order robustness and handling the forgetting issue. The source code is at https://github.com/lijiaqi/HALRP.
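The core storage idea, keeping a low-rank approximation of the per-task weight perturbation rather than a full matrix, can be sketched with a truncated SVD. This illustrates only the approximation step; the paper's Hessian-based rank selection and pruning are not reproduced here, and all shapes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: W_old holds the weights after the previous task,
# W_new the weights after adapting to the current task. Instead of storing
# the full perturbation W_new - W_old, keep a truncated SVD of it.
W_old = rng.standard_normal((64, 64))
delta = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))  # genuinely low-rank shift
W_new = W_old + delta

U, s, Vt = np.linalg.svd(W_new - W_old)
rank = 4
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-4 reconstruction

# Relative approximation error; near zero here because delta has rank 4.
err = np.linalg.norm((W_new - W_old) - approx) / np.linalg.norm(W_new - W_old)
print(f"relative error at rank {rank}: {err:.2e}")
```

Storing the rank-4 factors costs 4*(64+64)+4 numbers per layer instead of 64*64, which is the memory saving the low-rank parameterization buys.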
Low Compute Unlearning via Sparse Representations
Ashish Malik
Michael Curtis Mozer
Sanjeev Arora
Machine unlearning, which involves erasing knowledge about a forget set from a trained model, can prove to be costly and infeasible using existing techniques. We propose a low-compute unlearning technique based on a discrete representational bottleneck. We show that the proposed technique efficiently unlearns the forget set and incurs negligible damage to the model's performance on the rest of the dataset. We evaluate the proposed technique on the problem of class unlearning using four datasets: CIFAR-10, CIFAR-100, LACUNA-100 and ImageNet-1k. We compare the proposed technique to SCRUB, a state-of-the-art approach which uses knowledge distillation for unlearning. Across all four datasets, the proposed technique performs as well as, if not better than SCRUB while incurring almost no computational cost.
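Why a discrete bottleneck makes unlearning cheap: if inputs are routed to a finite set of codes, forgetting can reduce to disabling the codes the forget set activates, with no gradient updates. The toy below is a schematic illustration of that routing-and-disabling idea, not the paper's architecture; the codebook, dimensions, and "alive" mask are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete bottleneck: each input is quantized to its nearest codebook
# entry. "Unlearning" a forget set = disabling the codes those examples
# select, so they can never be routed to again.
codebook = rng.standard_normal((16, 8))   # 16 codes of dimension 8
alive = np.ones(16, dtype=bool)           # codes still usable

def quantize(x):
    dists = np.linalg.norm(codebook - x, axis=1)
    dists[~alive] = np.inf                # disabled codes never match
    return int(np.argmin(dists))

forget_set = rng.standard_normal((5, 8))
forget_codes = {quantize(x) for x in forget_set}
alive[list(forget_codes)] = False         # the entire "unlearning" step

print(f"disabled {len(forget_codes)} of 16 codes")
```

After the mask update, forget-set inputs fall back to other codes while the rest of the codebook, and hence performance on retained data, is untouched, which is the low-compute property the abstract claims.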
Gaining Biological Insights through Supervised Data Visualization
Jake S. Rhodes
Marc Girard
Catherine Larochelle
Boaz Lahav
Elsa Brunet-Ratnasingham
Amélie Pagliuzza
Lorie Marchitto
Wei Zhang
Adele Cutler
Francois Grand’Maison
Anhong Zhou
Andrés Finzi
Nicolas Chomont
Daniel E. Kaufmann
Alexandre Prat
Kevin R. Moon
Dimensionality reduction-based data visualization is pivotal in comprehending complex biological data. The most common methods, such as PHATE, t-SNE, and UMAP, are unsupervised and therefore reflect the dominant structure in the data, which may be independent of expert-provided labels. Here we introduce a supervised data visualization method called RF-PHATE, which integrates expert knowledge for further exploration of the data. RF-PHATE leverages random forests to capture intricate feature-label relationships. Extracting information from the forest, RF-PHATE generates low-dimensional visualizations that highlight relevant data relationships while disregarding extraneous features. This approach scales to large datasets and applies to classification and regression. We illustrate RF-PHATE's prowess through three case studies. In a multiple sclerosis study using longitudinal clinical and imaging data, RF-PHATE unveils a sub-group of patients with non-benign relapsing-remitting multiple sclerosis, demonstrating its aptitude for time-series data. In the context of Raman spectral data, RF-PHATE effectively showcases the impact of antioxidants on diesel exhaust-exposed lung cells, highlighting its proficiency in noisy environments. Furthermore, RF-PHATE aligns established geometric structures with COVID-19 patient outcomes, enriching interpretability in a hierarchical manner. RF-PHATE bridges expert insights and visualizations, promising knowledge generation. Its adaptability, scalability, and noise tolerance underscore its potential for widespread adoption.
Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles
Alexander Rubinstein
Damien Teney
Seong Joon Oh
Armand Mihai Nicolicioiu
Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to a phenomenon known as shortcut learning, where a model relies on erroneous, easy-to-learn cues while ignoring reliable ones. In this work, we propose
Propositional Logics for the Lawvere Quantale
Giorgio Bacci
Radu Mardare
Gordon Plotkin
scSniper: Single-cell Deep Neural Network-based Identification of Prominent Biomarkers
Mingyang Li
Yanshuo Chen
Unveiling the Impact of Arsenic Toxicity on Immune Cells in Atherosclerotic Plaques: Insights from Single-Cell Multi-Omics Profiling
Kiran Makhani
Xiuhui Yang
France Dierick
Nivetha Subramaniam
Natascha Gagnon
Talin Ebrahimian
Hao Wu
Koren K. Mann