Publications
GPU acceleration of finite state machine input execution: Improving scale and performance
Model‐based development is a popular development approach in which software is implemented and verified based on a model of the required system. Finite state machines (FSMs) are widely used as models for systems in several domains. Validating that a model accurately represents the required behaviour involves the generation and execution of a large number of input sequences, which is often an expensive and time‐consuming process. In this paper, we speed up the execution of input sequences for FSM validation, by leveraging the high degree of parallelism of modern graphics processing units (GPUs) for the automatic execution of FSM input sequences in parallel on the GPU threads. We expand our existing work by providing techniques that improve the performance and scalability of this approach. We conduct extensive empirical evaluation using 15 large FSMs from the networking domain and measure GPU speed‐up over a 16‐core CPU, taking into account total GPU time, which includes both data transfer and kernel execution time. We found that GPUs execute FSM input sequences up to 9.28× faster than a 16‐core CPU, with an average speed‐up of 4.53× across all subjects. Our optimizations achieve an average improvement over existing work of 58.95% for speed‐up and scalability to large FSMs with over 2K states and 500K transitions. We also found that techniques aimed at reducing the number of required input sequences for large FSMs with high density were ineffective when applied to all‐transition pair coverage, thus emphasizing the need for approaches like ours that speed up input execution.
2021-10-07
Software Testing, Verification and Reliability (publié)
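The core operation the paper parallelizes is table-driven FSM input execution: each input sequence is independent of the others, which is what makes one-thread-per-sequence GPU execution possible. A minimal sketch of that operation (the FSM, inputs, and function name below are illustrative, not taken from the paper's subjects):

```python
# Sketch of table-driven FSM input execution. Because sequences are
# independent, each could run on its own GPU thread; here we loop
# sequentially to show the per-sequence work.

def run_sequences(transitions, start_state, sequences):
    """Execute each input sequence against the FSM and return final states.

    transitions: dict mapping (state, symbol) -> next state.
    """
    final_states = []
    for seq in sequences:
        state = start_state
        for symbol in seq:
            state = transitions[(state, symbol)]
        final_states.append(state)
    return final_states

# Toy two-state FSM: 'a' toggles the state, 'b' keeps it.
fsm = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 0, (1, "b"): 1,
}
print(run_sequences(fsm, 0, ["ab", "aab", "bb"]))  # [1, 0, 0]
```

On a GPU the transition dictionary would be flattened into a dense state × symbol array so that every thread performs the same indexed lookups, which is what keeps the kernel branch-free.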
Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
2021-10-03
Proceedings of the 2020 Conference on Robot Learning (publié)
Evaluate the robustness of an automated analysis pipeline for detecting SC atrophy.
Simulate spinal cord atrophy and scan-rescan variability.
Fully automated analysis method available on an open access database.
Evaluation of sample size and inter/intra-subject variability for T1w and T2w images.
Explainability for machine learning models has gained considerable attention within the research community given the importance of deploying more reliable machine-learning systems. In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction, providing details about the model's decision-making. Current methods tend to generate trivial counterfactuals about a model's decisions, as they often suggest to exaggerate or remove the presence of the attribute being classified. For the machine learning practitioner, these types of counterfactuals offer little value, since they provide no new information about undesired model or data biases. In this work, we identify the problem of trivial counterfactual generation and we propose DiVE to alleviate it. DiVE learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss to uncover multiple valuable explanations about the model's prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods. Code is available at https://github.com/ElementAI/beyond-trivial-explanations.
2021-09-30
2021 IEEE/CVF International Conference on Computer Vision (ICCV) (publié)
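The diversity-enforcing loss mentioned in the abstract can be sketched as a penalty on pairwise similarity between the latent perturbations, so that each counterfactual pushes the input in a different direction. The exact loss form and shapes below are assumptions for illustration, not the paper's implementation:

```python
# Minimal sketch of a diversity-enforcing loss over a set of latent
# perturbation vectors: penalize squared pairwise cosine similarity, so
# a low value means the perturbations encode distinct explanations.
import math

def diversity_loss(perturbations, eps=1e-8):
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u)) + eps
        nv = math.sqrt(sum(b * b for b in v)) + eps
        return dot / (nu * nv)

    n = len(perturbations)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(perturbations[i], perturbations[j]) ** 2
               for i, j in pairs) / len(pairs)

orthogonal = [[1.0, 0.0], [0.0, 1.0]]  # maximally diverse: loss near 0
identical = [[1.0, 0.0], [1.0, 0.0]]   # no diversity: loss near 1
print(diversity_loss(orthogonal), diversity_loss(identical))
```

In training, a term like this would be minimized jointly with the counterfactual objective so the perturbations cannot all collapse onto the single "exaggerate the attribute" direction.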
Evaluating Montréal’s harm reduction interventions for people who inject drugs: protocol for observational study and cost-effectiveness analysis
Dimitra Panagiotoglou
Michal Abrahamowicz
David L Buckeridge
J Jaime Caro
Eric Latimer
Mathieu Maheu-Giroux
Erin C Strumpf
The main harm reduction interventions for people who inject drugs (PWID) are supervised injection facilities, needle and syringe programmes and opioid agonist treatment. Current evidence supporting their implementation and operation underestimates their usefulness by excluding skin, soft tissue and vascular infections (SSTVIs) and anoxic/toxicity-related brain injury from cost-effectiveness analyses (CEA). Our goal is to conduct a comprehensive CEA of harm reduction interventions in a setting with a large, dispersed, heterogeneous population of PWID, and include prevention of SSTVIs and anoxic/toxicity-related brain injury as measures of benefit in addition to HIV, hepatitis C and overdose morbidity and mortality averted.
This protocol describes how we will develop an open, retrospective cohort of adult PWID living in Québec between 1 January 2009 and 31 December 2020 using administrative health record data. By complementing this data with non-linkable paramedic dispatch records, regional monthly needle and syringe dispensation counts and repeated cross-sectional biobehavioural surveys, we will estimate the hazards of occurrence and the impact of Montréal’s harm reduction interventions on the incidence of drug-use-related injuries, infections and deaths. We will synthesise results from our empirical analyses with published evidence to simulate infections and injuries in a hypothetical population of PWID in Montréal under different intervention scenarios including current levels of use and scale-up, and assess the cost-effectiveness of each intervention from the public healthcare payer’s perspective.
This study was approved by McGill University’s Institutional Review Board (Study Number: A08-E53-19B). We will work with community partners to disseminate results to the public and scientific community via scientific conferences, a publicly accessible report, op-ed articles and open access peer-reviewed journals.
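The protocol's final step, assessing cost-effectiveness of each intervention scenario from the payer's perspective, rests on the incremental cost-effectiveness ratio (ICER). A worked sketch with invented placeholder figures, not study results:

```python
# Hedged sketch of the core cost-effectiveness calculation: compare an
# intervention scenario against the status quo via the ICER. All numbers
# below are hypothetical.

def icer(cost_new, effect_new, cost_base, effect_base):
    """Incremental cost per unit of health benefit gained.

    'effect' could be QALYs or infections/injuries averted; the payer
    perspective means costs are healthcare-system costs only.
    """
    delta_cost = cost_new - cost_base
    delta_effect = effect_new - effect_base
    if delta_effect == 0:
        raise ValueError("no incremental benefit; ICER undefined")
    return delta_cost / delta_effect

# Hypothetical scale-up: an extra $2.4M in programme costs averts 60
# additional infections versus current levels of use.
print(icer(cost_new=10_400_000, effect_new=310,
           cost_base=8_000_000, effect_base=250))  # 40000.0 per infection averted
```

A scenario is then judged by comparing its ICER against a willingness-to-pay threshold, which is why the protocol simulates both current levels of use and scale-up.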
Marine debris is severely threatening marine life and causing sustained pollution to the whole ecosystem. To prevent wastes from getting into the ocean, it is helpful to clean up floating wastes in inland waters using autonomous cleaning devices such as unmanned surface vehicles. The cleaning efficiency relies on a highly accurate and robust object detection system. However, the small size of the target, the strong light reflection over the water surface, and the reflection of other objects on the bank side all bring challenges to a vision-based object detection system. To promote the practical application of autonomous floating waste cleaning, we present FloW†, the first dataset for floating waste detection in inland water areas. The dataset consists of an image sub-dataset FloW-Img and a multimodal sub-dataset FloW-RI which contains synchronized millimeter wave radar data and images. Accurate annotations for images and radar data are provided, supporting floating waste detection strategies based on image, radar data, and the fusion of the two sensors. We perform several baseline experiments on our dataset, including vision-based and radar-based detection methods. The results show that the detection accuracy is relatively low and floating waste detection still remains a challenging task.
2021-09-30
IEEE International Conference on Computer Vision (publié)
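The detection baselines above are scored by box overlap, and the small size of floating targets is exactly where overlap scores degrade fastest. A minimal sketch of intersection-over-union (IoU), the standard matching criterion, using (x1, y1, x2, y2) boxes; the example boxes are generic, not from FloW:

```python
# IoU of two axis-aligned boxes: intersection area over union area.
# A prediction typically counts as correct when IoU exceeds a threshold
# such as 0.5, which small objects miss with even a few pixels of offset.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A small floating object predicted with a 2-pixel offset:
print(iou((10, 10, 20, 20), (12, 12, 22, 22)))  # 64/136 ≈ 0.47, below 0.5
```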
Inferring objects and their relationships from an image in the form of a scene graph is useful in many applications at the intersection of vision and language. We consider a challenging problem of compositional generalization that emerges in this task due to a long tail data distribution. Current scene graph generation models are trained on a tiny fraction of the distribution corresponding to the most frequent compositions. However, test images might contain zero- and few-shot compositions of objects and relationships. Despite each of the object categories and the predicate (e.g. 'on') being frequent in the training data, the models often fail to properly understand such unseen or rare compositions. To improve generalization, it is natural to attempt increasing the diversity of the training distribution. However, in the graph domain this is non-trivial. To that end, we propose a method to synthesize rare yet plausible scene graphs by perturbing real ones. We then propose and empirically study a model based on conditional generative adversarial networks (GANs) that allows us to generate visual features of perturbed scene graphs and learn from them in a joint fashion. When evaluated on the Visual Genome dataset, our approach yields marginal, but consistent improvements in zero- and few-shot metrics. We analyze the limitations of our approach indicating promising directions for future research.
2021-09-30
2021 IEEE/CVF International Conference on Computer Vision (ICCV) (publié)
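The perturbation step described above, synthesizing rare yet plausible scene graphs from real ones, can be sketched as swapping one node of a (subject, predicate, object) triplet for a same-role alternative. The category list and substitution rule below are illustrative assumptions, not the paper's method:

```python
# Sketch of synthesizing rare compositions by perturbing real scene-graph
# triplets: keep the predicate and object, swap the subject for another
# category, yielding compositions unseen (or rare) in training.
import random

def perturb_triplets(triplets, candidates, rng):
    perturbed = []
    for subj, pred, obj in triplets:
        options = [c for c in candidates if c != subj]
        perturbed.append((rng.choice(options), pred, obj))
    return perturbed

rng = random.Random(0)
real = [("person", "riding", "horse"), ("cup", "on", "table")]
categories = ["person", "dog", "cup", "elephant"]
print(perturb_triplets(real, categories, rng))
```

In the paper's setting a plausibility constraint would also be needed (hence "rare yet plausible"), and the GAN then generates visual features for the perturbed graphs rather than images.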
Normalizing automatic spinal cord cross-sectional area measures
S. Bédard
J. Cohen-Adad
Spinal cord cross-sectional area (CSA) is a relevant biomarker to assess spinal cord atrophy in various neurodegenerative diseases. However, the considerable inter-subject variability among healthy participants currently limits its usage. Previous studies explored factors contributing to the variability, yet the normalization models were based on a relatively limited number of participants (typically < 300 participants), required manual intervention, and were not implemented in an open-access comprehensive analysis pipeline. Another limitation is related to the imprecise prediction of the spinal levels when using vertebral levels as a reference; a question never addressed before in the search for a normalization method. In this study we implemented a method to measure CSA automatically from a spatial reference based on the central nervous system (the pontomedullary junction, PMJ), we investigated various factors to explain variability, and we developed normalization strategies on a large cohort (N=804).
Cervical spinal cord CSA was computed on T1w MRI scans for 804 participants from the UK Biobank database. In addition to computing the cross-sectional area at the C2-C3 vertebral disc, it was also measured at 64 mm caudal from the PMJ. The effect of various biological, demographic and anatomical factors was explored by computing Pearson's correlation coefficients. A stepwise linear regression identified significant predictors; the coefficients of the best-fit model were used to normalize CSA.
The correlation between CSA measured at C2-C3 and at the PMJ was y = 0.98x + 1.78 (R² = 0.97). The best normalization model included thalamus volume, brain volume, sex and the interaction between brain volume and sex. With this model, the coefficient of variation decreased from 10.09% (without normalization) to 8.59%, a relative reduction of 14.85%.
In this study we identified factors explaining inter-subject variability of spinal cord CSA over a large cohort of participants, and developed a normalization model to reduce the variability. We implemented an approach, based on the PMJ, to measure CSA to overcome limitations associated with the vertebral reference. This approach warrants further validation, especially in longitudinal cohorts. The PMJ-based method and normalization models are readily available in the Spinal Cord Toolbox.
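The normalization step described above amounts to regressing CSA on the predictors and removing each subject's predicted deviation from the cohort mean, then comparing the coefficient of variation (CoV = std / mean) before and after. A worked sketch on synthetic data with a single predictor; the study's actual model also included thalamus volume, sex, and a brain volume × sex interaction:

```python
# Sketch of CSA normalization via linear regression. Data are synthetic;
# CSA here is made (exactly) linear in brain volume for illustration.
import statistics

def cov(values):
    return statistics.stdev(values) / statistics.mean(values)

brain_vol = [1.10, 1.25, 1.40, 1.55, 1.70]  # litres
csa = [66.0, 70.5, 75.0, 79.5, 84.0]        # mm^2

# Closed-form least-squares fit csa ≈ a * brain_vol + b (one predictor).
n = len(csa)
mx = sum(brain_vol) / n
my = sum(csa) / n
a = sum((x - mx) * (y - my) for x, y in zip(brain_vol, csa)) / \
    sum((x - mx) ** 2 for x in brain_vol)
b = my - a * mx

# Normalize: subtract the predicted value, re-center on the cohort mean.
csa_norm = [y - (a * x + b) + my for x, y in zip(brain_vol, csa)]
print(round(cov(csa), 4), round(cov(csa_norm), 4))
```

With real data the predictor explains only part of the variance, so the CoV shrinks (as in the study, 10.09% to 8.59%) rather than vanishing as it does in this perfectly linear toy example.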