Publications
Invariant representation driven neural classifier for anti-QCD jet tagging
Causal learning has long concerned itself with the accurate recovery of underlying causal mechanisms. Such causal modelling enables better explanations of out-of-distribution data. Prior works on causal learning assume that the high-level causal variables are given. However, in machine learning tasks, one often operates on low-level data like image pixels or high-dimensional vectors. In such settings, the entire Structural Causal Model (SCM) -- structure, parameters, and high-level causal variables -- is unobserved and needs to be learnt from low-level data. We treat this problem as Bayesian inference of the latent SCM, given low-level data. For linear Gaussian additive noise SCMs, we present a tractable approximate inference method which performs joint inference over the causal variables, structure, and parameters of the latent SCM from random, known interventions. Experiments are performed on synthetic datasets and a causally generated image dataset to demonstrate the efficacy of our approach. We also perform image generation from unseen interventions, thereby verifying out-of-distribution generalization for the proposed causal model.
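For orientation, here is a minimal sketch of the model class this abstract assumes, a linear Gaussian additive-noise SCM, sampled by ancestral simulation. The weight matrix, noise scales, and variable ordering are illustrative choices, not the paper's inference method.

```python
# A minimal sketch of a linear Gaussian additive-noise SCM, sampled by
# ancestral sampling. All parameter values here are illustrative.
import numpy as np

def sample_linear_gaussian_scm(W, b, sigma, n_samples, seed=0):
    """Sample x_i = sum_j W[j, i] * x_j + b[i] + eps_i with eps_i ~ N(0, sigma[i]^2).

    W is strictly upper-triangular (d x d), so the index order of the
    variables is already a valid topological order of the DAG.
    """
    rng = np.random.default_rng(seed)
    d = W.shape[0]
    x = np.zeros((n_samples, d))
    eps = rng.normal(0.0, sigma, size=(n_samples, d))
    for i in range(d):  # ancestral sampling: parents come before children
        x[:, i] = x[:, :i] @ W[:i, i] + b[i] + eps[:, i]
    return x

# Example: a 3-node chain x0 -> x1 -> x2.
W = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, -0.8],
              [0.0, 0.0, 0.0]])
data = sample_linear_gaussian_scm(W, b=np.zeros(3), sigma=np.ones(3), n_samples=1000)
```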
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task. To solve this problem, we need to account for the dynamics limitations and motion stability of a dynamic legged robot during control. Moreover, we need to plan motions that shoot a hard-to-model, deformable ball rolling on the ground with uncertain friction to a desired location. In this paper, we propose a hierarchical framework that leverages deep reinforcement learning to train (a) a robust motion control policy that can track arbitrary motions and (b) a planning policy that decides the desired kicking motion to shoot a soccer ball to a target. We deploy the proposed framework on an A1 quadrupedal robot and enable it to accurately shoot the ball to random targets in the real world.
2022-10-23
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (published)
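The two-level design in the abstract above can be pictured with a short sketch: a high-level planning policy picks a kicking-motion command at a low rate, and a low-level motion control policy tracks it at the control rate. The class, the stub environment, and all interfaces below are hypothetical stand-ins, not the authors' implementation.

```python
# A minimal sketch of hierarchical control for a shooting task: a planner
# chooses a kick, a tracker follows it for several control steps.
import numpy as np

class HierarchicalShootingController:
    def __init__(self, planner, tracker, control_steps_per_plan=50):
        self.planner = planner          # (robot state, ball, target) -> kicking-motion command
        self.tracker = tracker          # (robot state, command) -> joint position targets
        self.k = control_steps_per_plan

    def rollout_step(self, env, state, ball, target):
        command = self.planner(state, ball, target)     # high level: choose the kick
        for _ in range(self.k):                         # low level: track the motion
            state, ball = env.step(self.tracker(state, command))
        return state, ball

# Toy usage with stub policies and a stub environment.
class StubEnv:
    def step(self, joint_targets):
        return np.zeros(12), np.zeros(3)                # next robot state, ball position

ctrl = HierarchicalShootingController(
    planner=lambda s, b, t: np.zeros(4),
    tracker=lambda s, c: np.zeros(12),
)
state, ball = ctrl.rollout_step(StubEnv(), np.zeros(12), np.zeros(3), target=np.ones(3))
```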
Context: Mental healthcare systems are facing an ever-growing demand for appropriate assessment and intervention. Unfortunately, services are often centralized, overloaded, and inaccessible, resulting in greater institutional and social inequities. Therefore, there is an urgent need to establish easy-to-implement methods for early diagnosis and personalized follow-up. In recent years, serious games have started to offer such a clinical tool at scale. Problem: There are critical challenges to the development of secure and inclusive serious games for clinical research. First, the quality of the data and features analyzed must be well defined early in the research process in order to draw meaningful conclusions. Second, algorithms must be aligned with the purpose of the research while not perpetuating bias. Finally, the technologies used must be widely accessible and sufficiently engaging for users. Focus of the paper: To tackle these challenges, we designed a participatory project that combines three innovative technologies: Mixed Reality, Serious Gaming, and Machine Learning. We analyze preliminary data with a focus on the identification of the players and the measurement of classical biases such as sex and environment of data collection. Method: We co-developed with patients and their families, as well as clinicians, a serious game in mixed reality specifically designed for evaluation and therapeutic intervention in autism. Preliminary data were collected from neurotypical individuals with a mixed reality headset. Relevant behavioral features were extracted and used to train several classification algorithms for player identification. Results: We were able to classify players above chance, with neural networks achieving slightly higher accuracy. Interestingly, the accuracy was significantly higher when players were separated by sex. Furthermore, the uncontrolled condition showed better levels of accuracy than the controlled condition. This could mean that the data are richer when the player interacts freely with the game. Our proof of concept cannot exclude the possibility that this last result is linked to the experimental setup. Future development will clarify this point with a larger sample size and the use of deep learning algorithms. Implications: We show that serious games in mixed reality can be a valuable tool to collect clinical data. Our preliminary results highlight important biases to consider for future studies, especially for sex and the context of data collection. Next, we will evaluate the usability, accessibility, and tolerability of the device and the game in autistic children. In addition, we will evaluate the psychometric properties of the serious game, especially for patient stratification. This project aims to develop a platform for the diagnosis and therapy of autism, which can eventually be easily extended to other conditions and settings such as the evaluation of depression or stroke rehabilitation. Such a tool can offer novel possibilities for the study, evaluation, and treatment of mental conditions at scale, and thus ease the burden on healthcare systems.
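For readers who want the shape of the player-identification analysis, here is a minimal sketch: several off-the-shelf classifiers compared by cross-validation on extracted behavioral features, optionally stratified by sex as in the comparison above. The feature matrix, labels, and model choices are hypothetical placeholders, not the study's pipeline.

```python
# A minimal sketch of the player-identification comparison described above.
# X (behavioral features), y (player labels), and group are assumed inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def compare_classifiers(X, y, cv=5):
    models = {
        "logreg": LogisticRegression(max_iter=1000),
        "forest": RandomForestClassifier(n_estimators=200),
        "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000),
    }
    return {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}

def compare_by_group(X, y, group):
    # Stratified comparison, e.g. group = per-player sex labels.
    return {g: compare_classifiers(X[group == g], y[group == g]) for g in np.unique(group)}
```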
Modularity and compositionality are promising inductive biases for addressing longstanding problems in machine learning such as better systematic generalization, as well as better transfer and lower forgetting in the context of continual learning. Here we study how attention-based module selection can help achieve compositional modularity – i.e. decomposition of tasks into meaningful sub-tasks which are tackled by independent architectural entities that we call modules. These sub-tasks must be reusable and the system should be able to learn them without additional supervision. We design a simple experimental setup in which the model is trained to solve mathematical equations with multiple math operations applied sequentially. We study different attention-based module selection strategies, inspired by the principles introduced in the recent literature. We evaluate the method's ability to learn modules that can recover the underlying sub-tasks (operations) used for data generation, as well as its ability to generalize compositionally. We find that meaningful module selection (i.e. routing) is the key to compositional generalization. Further, without access to privileged information about which part of the input should be used for module selection, the routing component performs poorly on samples that are compositionally out of the training distribution. We find that the main reason for this lies in the routing component, since many of the tested methods perform well OOD if we report the performance of the best-performing path at test time. Additionally, we study the role of the number of primitives, the number of training points, and bottlenecks for modular specialization.
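A minimal sketch of the routing mechanism under study may help: a router scores each module from the input, and the layer either mixes module outputs by those scores (soft routing) or commits to the argmax module (hard routing). This PyTorch snippet is a generic illustration of the mechanism, not the paper's architecture or hyperparameters.

```python
# A minimal sketch of attention-based module selection (routing).
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    def __init__(self, dim, n_modules, hard=False):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_modules)
        )
        self.router = nn.Linear(dim, n_modules)  # produces per-module scores
        self.hard = hard

    def forward(self, x):                                   # x: (batch, dim)
        scores = torch.softmax(self.router(x), dim=-1)      # (batch, n_modules)
        if self.hard:                                       # hard routing: one module per sample
            idx = scores.argmax(dim=-1, keepdim=True)
            scores = torch.zeros_like(scores).scatter_(-1, idx, 1.0)
        outs = torch.stack([m(x) for m in self.experts], dim=-1)  # (batch, dim, n_modules)
        return (outs * scores.unsqueeze(1)).sum(dim=-1)     # score-weighted mixture
```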
The rise in screen time and the isolation brought by the different containment measures implemented during the COVID-19 pandemic have led to an alarming increase in cases of online grooming. Online grooming is defined as all the strategies used by predators to lure children into sexual exploitation. Previous attempts made in industry and academia to detect grooming rely on accessing and monitoring users' private conversations, either by training a model centrally or by sending personal conversations to a global server. We introduce a first privacy-preserving, cross-device federated learning framework for the early detection of sexual predators, which aims to ensure a safe online environment for children while respecting their privacy.
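To make the cross-device setting concrete, here is a minimal sketch of one federated averaging (FedAvg) round, the general mechanism behind training without sending raw conversations to a server. The model, loaders, and hyperparameters are illustrative; the paper's framework adds detection-specific components not shown here.

```python
# A minimal sketch of one FedAvg round: clients train locally on private
# data, and only model weights (never raw text) reach the server.
import copy
import torch

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=0.01):
    client_states, client_sizes = [], []
    for loader in client_loaders:                      # each client trains locally
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(model(x), y)
                loss.backward()
                opt.step()
        client_states.append(model.state_dict())
        client_sizes.append(len(loader.dataset))
    # Server aggregates: size-weighted average of client weights.
    total = sum(client_sizes)
    new_state = {
        k: sum(s[k].float() * (n / total) for s, n in zip(client_states, client_sizes))
        for k in client_states[0]
    }
    global_model.load_state_dict(new_state)
    return global_model
```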
We develop the sparse VAE for unsupervised representation learning on high-dimensional data. The sparse VAE learns a set of latent factors (representations) which summarize the associations in the observed data features. The underlying model is sparse in that each observed feature (i.e. each dimension of the data) depends on a small subset of the latent factors. As examples, in ratings data each movie is only described by a few genres; in text data each word is only applicable to a few topics; in genomics, each gene is active in only a few biological processes. We prove that such sparse deep generative models are identifiable: with infinite data, the true model parameters can be learned. (In contrast, most deep generative models are not identifiable.) We empirically study the sparse VAE with both simulated and real data. We find that it recovers meaningful latent factors and has smaller held-out reconstruction error than related methods.
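The sparsity structure is easy to see in a sketch: a decoder where a feature-by-factor mask zeroes out all but a few loadings per observed feature. This illustrates only the dependence structure; the paper's prior and training objective are not reproduced here.

```python
# A minimal sketch of per-feature sparsity: each observed feature loads on
# only the latent factors its mask row allows.
import torch
import torch.nn as nn

class SparseLinearDecoder(nn.Module):
    def __init__(self, n_features, n_factors, mask):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_features, n_factors) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_features))
        self.register_buffer("mask", mask)   # (n_features, n_factors), 1 = dependence allowed

    def forward(self, z):                     # z: (batch, n_factors)
        return z @ (self.weight * self.mask).T + self.bias

# Example: 6 features, 3 factors, each feature depending on a single factor.
mask = torch.zeros(6, 3)
mask[:2, 0] = mask[2:4, 1] = mask[4:, 2] = 1.0
decoder = SparseLinearDecoder(6, 3, mask)
x_hat = decoder(torch.randn(8, 3))
```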
Impact of a vaccine passport on first-dose COVID-19 vaccine coverage by age and area-level social determinants in the Canadian provinces of Quebec and Ontario: an interrupted time series analysis
Background: In Canada, all provinces implemented vaccine passports in 2021 to increase vaccine uptake and reduce transmission in non-essential indoor spaces. We evaluate the impact of vaccine passport policies on first-dose COVID-19 vaccination coverage by age, area-level income, and proportion racialized. Methods: We performed interrupted time-series analyses using vaccine registry data linked to census information in Quebec and Ontario (20.5 million people aged ≥12 years; unit of analysis: dissemination area). We fit negative binomial regressions to weekly first-dose vaccination, using a natural spline to capture pre-announcement trends and adjusting for baseline vaccination coverage (start: July 3rd; end: October 23rd in Quebec, November 13th in Ontario). We obtained counterfactual vaccination rates and coverage, and estimated the impact of vaccine passports on vaccination coverage (absolute) and new vaccinations (relative). Results: In both provinces, pre-announcement first-dose vaccination coverage was 82% (≥12 years). The announcement resulted in estimated increases in vaccination coverage of 0.9 percentage points (p.p.; 95% CI: 0.4-1.2) in Quebec and 0.7 p.p. (95% CI: 0.5-0.8) in Ontario. In relative terms, these increases correspond to 23% (95% CI: 10-36%) and 19% (95% CI: 15-22%) more vaccinations. The impact was larger among people aged 12-39 (1-2 p.p.). There was little variability in the absolute impact by area-level income or proportion racialized in either province. Conclusions: In the context of high baseline vaccine coverage across the two provinces, the announcement of vaccine passports had a small impact on first-dose coverage and little impact on reducing economic and racial inequities in vaccine coverage. Findings suggest the need for other policies to further increase vaccination coverage among lower-income and more racialized neighbourhoods and communities.
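In code, the described model corresponds roughly to the statsmodels sketch below: a negative binomial GLM of weekly first doses on a natural spline of time plus a post-announcement indicator, with a log population offset, and a counterfactual prediction with the indicator switched off. All column names are hypothetical placeholders, not the study's variables.

```python
# A minimal sketch of an interrupted time-series fit with a negative
# binomial GLM and a natural spline for the pre-announcement trend.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_its(df):
    """df columns (assumed): doses, week, post_announcement (0/1), pop_at_risk."""
    model = smf.glm(
        "doses ~ cr(week, df=4) + post_announcement",   # cr() = natural cubic spline (patsy)
        data=df,
        family=sm.families.NegativeBinomial(),
        offset=np.log(df["pop_at_risk"]),
    )
    fit = model.fit()
    # Counterfactual: predict with the announcement indicator switched off.
    counterfactual = fit.predict(
        df.assign(post_announcement=0), offset=np.log(df["pop_at_risk"])
    )
    return fit, counterfactual
```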
Throughout the SARS-CoV-2 pandemic, several variants of concern (VOCs) have been identified, many of which share recurrent mutations in the spike protein's receptor binding domain (RBD). This region coincides with known epitopes and can therefore have an impact on immune escape. Protracted infections in immunosuppressed patients have been hypothesized to lead to an enrichment of such mutations and therefore drive evolution towards VOCs. Here, we show that immunosuppressed patients with hematologic cancers develop distinct viral populations with immune escape mutations throughout the course of their infection. Notably, by investigating the co-occurrence of substitutions on individual sequencing reads in the RBD, we found quasispecies harboring mutations that confer resistance to known monoclonal antibodies (mAbs), such as S:E484K and S:E484A. Furthermore, we provide the first evidence for a viral reservoir based on intra-host phylogenetics. Our results on viral reservoirs can shed light on protracted infections interspersed with periods where the virus is undetectable, and offer an alternative explanation for some long-COVID cases. Our findings also highlight that protracted infections should be treated with combination therapies rather than a single mAb, to clear pre-existing resistant mutations.
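The read-level co-occurrence analysis can be sketched with pysam: count reads that cover two RBD positions and carry the alternate base at both. The contig name, coordinates, bases, and file path are placeholders, and a real pipeline would add base-quality and indel filtering; this is an illustration of the idea, not the authors' pipeline.

```python
# A minimal sketch of counting co-occurring substitutions on single reads.
# Assumes an indexed BAM aligned to a SARS-CoV-2 reference; pysam positions
# are 0-based.
import pysam

def cooccurrence_count(bam_path, pos1, alt1, pos2, alt2, contig="MN908947.3"):
    both = covering = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig, min(pos1, pos2), max(pos1, pos2) + 1):
            # Map reference positions to query positions (aligned bases only).
            pairs = {ref: q for q, ref in read.get_aligned_pairs(matches_only=True)}
            if pos1 in pairs and pos2 in pairs:
                covering += 1
                seq = read.query_sequence
                if seq[pairs[pos1]] == alt1 and seq[pairs[pos2]] == alt2:
                    both += 1
    return both, covering   # reads with both alt bases / reads covering both sites
```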
Decision-making AI agents are often faced with two important challenges: the depth of the planning horizon, and the branching factor due to having many choices. Hierarchical reinforcement learning methods aim to solve the first problem, by providing shortcuts that skip over multiple time steps. To cope with the breadth, it is desirable to restrict the agent's attention at each step to a reasonable number of possible choices. The concept of affordances (Gibson, 1977) suggests that only certain actions are feasible in certain states. In this work, we first characterize "affordances" as a "hard" attention mechanism that strictly limits the available choices of temporally extended options. We then investigate the role of hard versus soft attention in training data collection, abstract value learning in long-horizon tasks, and handling a growing number of choices. To this end, we present an online, model-free algorithm to learn affordances that can be used to further learn subgoal options. Finally, we identify and empirically demonstrate the settings in which the "paradox of choice" arises, i.e. when having fewer but more meaningful choices improves the learning speed and performance of a reinforcement learning agent.
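The "hard attention" reading of affordances can be sketched directly: a learned feasibility score gates which options are even considered before value-based selection. The sigmoid threshold and the fallback rule below are illustrative choices, not the paper's algorithm.

```python
# A minimal sketch of affordances as a hard mask over temporally extended
# options for the current state.
import torch

def select_afforded_option(option_values, affordance_logits, threshold=0.5):
    """option_values: (n_options,) Q-values for the current state.
    affordance_logits: (n_options,) learned feasibility scores for the same state."""
    afforded = torch.sigmoid(affordance_logits) > threshold    # hard attention mask
    if not afforded.any():                                     # fall back to all options
        return int(option_values.argmax())
    masked = option_values.masked_fill(~afforded, float("-inf"))
    return int(masked.argmax())
```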