Publications
A Two-step Heuristic for the Periodic Demand Estimation Problem
Freight carriers rely on tactical plans to satisfy demand in a cost-effective way. For computational tractability in real large-scale settings, such plans are typically computed by solving deterministic and cyclic formulations. An important input is the periodic demand, i.e., the demand that is expected to repeat in each period of the planning horizon. Motivated by the discrepancy between time series forecasts of demand in each period and the periodic demand, Laage et al. (2021) recently introduced the Periodic Demand Estimation (PDE) problem and showed that it has a high value. However, they made strong assumptions on the solution space so that the problem could be solved by enumeration. In this paper we significantly extend their work. We propose a new PDE formulation that relaxes the strong assumptions on the solution space. We solve large instances of this formulation with a two-step heuristic. The first step reduces the dimension of the feasible space by clustering commodities based on instance-specific information about demand and supply interactions. The formulation, together with the first step, allows the problem to be solved in a second step by either metaheuristics or the state-of-the-art black-box optimization solver NOMAD. In an extensive empirical study using real data from the Canadian National Railway Company, we show that our methodology produces high-quality solutions and outperforms existing ones.
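The two-step idea can be illustrated in miniature. The sketch below is a hypothetical toy, not the authors' implementation: step one groups commodities with similar demand profiles via a naive k-means, and step two searches one periodic-demand multiplier per cluster with random search standing in for the metaheuristics or NOMAD; the cost function `tactical_cost` is a made-up stand-in for the tactical-planning objective.

```python
import random

def cluster_commodities(profiles, k, iters=20, seed=0):
    """Step 1: naive k-means on demand profiles (lists of per-period demand)."""
    rng = random.Random(seed)
    centers = rng.sample(profiles, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in profiles:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Recompute each center as the mean profile of its group (keep old if empty).
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers, groups

def tactical_cost(multipliers, centers):
    """Toy stand-in for the cost of the tactical plan under a candidate demand."""
    return sum((m * sum(c) / len(c) - 10.0) ** 2 for m, c in zip(multipliers, centers))

def random_search(centers, evals=500, seed=1):
    """Step 2: black-box search over one multiplier per cluster."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(evals):
        x = [rng.uniform(0.5, 2.0) for _ in centers]
        f = tactical_cost(x, centers)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Twelve synthetic commodities, each with three periods of demand.
profiles = [[8 + (i % 3), 9, 10 - (i % 2)] for i in range(12)]
centers, groups = cluster_commodities(profiles, k=3)
x, f = random_search(centers)
```

Clustering first means the search runs over k variables instead of one per commodity, which is what makes the black-box step tractable.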
In a companion paper by Cohen-Adad et al. we introduce the spine generic quantitative MRI protocol that provides valuable metrics for assessing spinal cord macrostructural and microstructural integrity. This protocol was used to acquire a single subject dataset across 19 centers and a multi-subject dataset across 42 centers (for a total of 260 participants), spanning the three main MRI manufacturers: GE, Philips and Siemens. Both datasets are publicly available via git-annex. Data were analysed using the Spinal Cord Toolbox to produce normative values as well as inter/intra-site and inter/intra-manufacturer statistics. Reproducibility for the spine generic protocol was high across sites and manufacturers, with an average inter-site coefficient of variation of less than 5% for all the metrics. Full documentation and results can be found at https://spine-generic.rtfd.io/. The datasets and analysis pipeline will help pave the way towards accessible and reproducible quantitative MRI in the spinal cord.
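The reproducibility statistic quoted here, the inter-site coefficient of variation, is simply the standard deviation of per-site means divided by their grand mean. A minimal sketch with made-up per-site values (the real values come from the Spinal Cord Toolbox pipeline):

```python
import statistics

def inter_site_cov(site_means):
    """Coefficient of variation (in %) across per-site means of one metric."""
    grand_mean = statistics.mean(site_means)
    return 100.0 * statistics.stdev(site_means) / grand_mean

# Hypothetical per-site means of, e.g., cord cross-sectional area in mm^2.
sites = [72.1, 73.4, 71.8, 72.9, 73.0]
cov = inter_site_cov(sites)
```

A CoV under 5%, as reported, means the between-site spread is small relative to the metric's typical magnitude.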
Recent years have seen a dramatic increase in studies measuring brain activity, physiological responses, and/or movement data from multiple individuals during social interaction. For example, so-called “hyperscanning” research has demonstrated that brain activity may become synchronized across people as a function of a range of factors. Such findings not only underscore the potential of hyperscanning techniques to capture meaningful aspects of naturalistic interactions, but also raise the possibility that hyperscanning can be leveraged as a tool to help improve such naturalistic interactions. Building on our previous work showing that exposing dyads to real-time inter-brain synchrony neurofeedback may help boost their interpersonal connectedness, we describe the biofeedback application Hybrid Harmony, a Brain-Computer Interface (BCI) that supports the simultaneous recording of multiple neurophysiological datastreams and the real-time visualization and sonification of inter-subject synchrony. We report results from 236 dyads experiencing synchrony neurofeedback during naturalistic face-to-face interactions, and show that pairs' social closeness and affective personality traits can be reliably captured with the inter-brain synchrony neurofeedback protocol, which incorporates several different online inter-subject connectivity analyses that can be applied interchangeably. Hybrid Harmony can be used by researchers who wish to study the effects of synchrony biofeedback, and by biofeedback artists and serious game developers who wish to incorporate multiplayer situations into their practice.
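One of the simplest online inter-subject connectivity measures of the kind the abstract mentions is a Pearson correlation between two subjects' signals over a sliding window. The sketch below is illustrative only; the signals, window length, and function names are assumptions, not Hybrid Harmony's actual API:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def windowed_synchrony(sig_a, sig_b, window):
    """Correlation per sliding window -> a real-time synchrony trace."""
    return [
        pearson(sig_a[i:i + window], sig_b[i:i + window])
        for i in range(len(sig_a) - window + 1)
    ]

# Two synthetic oscillations, the second slightly phase-shifted.
t = [i / 10.0 for i in range(100)]
a = [math.sin(x) for x in t]
b = [math.sin(x + 0.3) for x in t]
trace = windowed_synchrony(a, b, window=20)
```

The resulting trace is the kind of quantity that can be visualized or sonified in real time as feedback to the dyad.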
The connection patterns of neural circuits in the brain form a complex network. Collective signaling within the network manifests as patterned neural activity, and is thought to support human cognition and adaptive behavior. Recent technological advances permit macro-scale reconstructions of biological brain networks. These maps, termed connectomes, display multiple non-random architectural features, including heavy-tailed degree distributions, segregated communities and a densely interconnected core. Yet, how computation and functional specialization emerge from network architecture remains unknown. Here we reconstruct human brain connectomes using in vivo diffusion-weighted imaging, and use reservoir computing to implement these connectomes as artificial neural networks. We then train these neuromorphic networks to learn a cognitive task. We show that biologically realistic neural architectures perform optimally when they display critical dynamics. We find that performance is driven by network topology, and that the modular organization of large-scale functional systems is computationally relevant. Throughout, we observe a prominent interaction between network structure and dynamics, such that the same underlying architecture can support a wide range of learning capacities across dynamical regimes. This work opens new opportunities to discover how the network organization of the brain optimizes cognitive capacity, conceptually bridging neuroscience and artificial intelligence.
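The core mechanic of reservoir computing is simple to sketch: a fixed recurrent weight matrix (random here; in the paper, an empirically reconstructed connectome) is scaled toward the edge of stability, and an input stream drives rich internal states from which a simple readout would be trained. Sizes and the scaling scheme below are illustrative assumptions:

```python
import math
import random

def make_reservoir(n, density=0.2, scale=0.9, seed=0):
    """Fixed random recurrent weights, crudely normalized so dynamics stay stable."""
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) if rng.random() < density else 0.0
          for _ in range(n)] for _ in range(n)]
    # Divide by the max row sum of |w|, an upper bound on the spectral radius.
    bound = max(sum(abs(v) for v in row) for row in w)
    return [[scale * v / bound for v in row] for row in w]

def run_reservoir(w, inputs, w_in=0.5):
    """Drive the reservoir with a scalar input stream; collect tanh states."""
    n = len(w)
    state = [0.0] * n
    states = []
    for u in inputs:
        state = [math.tanh(sum(w[i][j] * state[j] for j in range(n)) + w_in * u)
                 for i in range(n)]
        states.append(state)
    return states

w = make_reservoir(30)
inputs = [math.sin(0.2 * t) for t in range(100)]
states = run_reservoir(w, inputs)
```

Only a linear readout on `states` would be trained; the recurrent weights stay fixed, which is what lets an empirical connectome serve directly as the network architecture.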
Neocortical inhibitory interneuron subtypes are differentially attuned to synchrony- and rate-coded information
Luke Y. Prince
Matthew M. Tran
Dorian Grey
Lydia Saad
Helen Chasiotis
Jeehyun Kwag
Michael M. Kohl
Blake A. Richards
Neurons can carry information with both the synchrony and rate of their spikes. However, it is unknown whether distinct subtypes of neurons are more sensitive to information carried by synchrony versus rate, or vice versa. Here, we address this question using patterned optical stimulation in slices of somatosensory cortex from mouse lines labelling fast-spiking (FS) and regular-spiking (RS) interneurons. We used optical stimulation in layer 2/3 to encode a 1-bit signal using either the synchrony or rate of activity. We then examined the mutual information between this signal and the interneuron responses. We found that for a synchrony encoding, FS interneurons carried more information in the first five milliseconds, while both interneuron subtypes carried more information than excitatory neurons in later responses. For a rate encoding, we found that RS interneurons carried more information after several milliseconds. These data demonstrate that distinct interneuron subtypes in the neocortex have distinct sensitivities to synchrony versus rate codes.
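The analysis described, mutual information between a 1-bit stimulus and a discretized neural response, can be estimated directly from joint counts. The response model below is synthetic (spike counts merely tend to be higher when the signal bit is 1), not the recorded data:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """I(S;R) in bits from a list of (signal, response) samples."""
    n = len(pairs)
    p_sr = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_r = Counter(r for _, r in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
        for (s, r), c in p_sr.items()
    )

rng = random.Random(0)
# Noisy responses: spike count = signal bit plus jitter, clamped to [0, 3].
samples = [(s, min(3, max(0, s + rng.choice([-1, 0, 0, 1]))))
           for s in (rng.randint(0, 1) for _ in range(2000))]
mi = mutual_information(samples)
```

For a 1-bit signal the estimate is bounded by 1 bit; a noiseless response channel would reach that bound, and independence would drive it toward zero.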
Barriers and facilitators to patient engagement in patient safety from patients and healthcare professionals' perspectives: A systematic review and meta-synthesis.
Zahra Chegini
Morteza Arab‐Zozani
Sheikh Mohammed Shariful Islam
Georgia Tobiano
S. A. Rahimi
AIMS
To explore patients' and healthcare professionals' (HCPs) perceived barriers and facilitators to patient engagement in patient safety.
METHODS
We conducted a systematic review and meta-synthesis, searching five computerized databases (PubMed/MEDLINE, Embase, Web of Science, Scopus and PsycINFO) as well as grey literature and the reference lists of included studies. The search was last run in December 2019, with no limitation on year of publication. Qualitative and mixed-methods studies that explored HCPs' and patients' perceptions of barriers and facilitators to patient engagement in patient safety were included. Two authors independently screened the titles and abstracts of studies. Next, the full texts of the screened studies were reviewed by two authors, and any discrepancies were resolved by consensus with a third author. The Mixed Methods Appraisal Tool was used for quality appraisal, and thematic analysis was used to synthesize results.
RESULTS
Nineteen of 2616 studies were included in this systematic review. Themes related to barriers included patient unwillingness, HCPs' unwillingness and inadequate infrastructure. Themes related to facilitators were encouraging patients, sharing information with patients, establishing trustful relationships, establishing patient-centred care and improving organizational resources.
CONCLUSION
Patients can play an active role in improving their own safety. Strategies are required to address the barriers that hinder or prevent patient engagement, build capacity and facilitate action.
The field of Continual Learning (CL) seeks to develop algorithms that accumulate knowledge and skills over time through interaction with non-stationary environments. In practice, a plethora of evaluation procedures (settings) and algorithmic solutions (methods) exist, each with their own potentially disjoint set of assumptions. This variety makes measuring progress in CL difficult. We propose a taxonomy of settings, where each setting is described as a set of assumptions. A tree-shaped hierarchy emerges from this view, where more general settings become the parents of those with more restrictive assumptions. This makes it possible to use inheritance to share and reuse research, as developing a method for a given setting also makes it directly applicable to any of its children. We instantiate this idea as a publicly available software framework called Sequoia, which features a wide variety of settings from both the Continual Supervised Learning (CSL) and Continual Reinforcement Learning (CRL) domains. Sequoia also includes a growing suite of methods which are easy to extend and customize, in addition to more specialized methods from external libraries. We hope that this new paradigm and its first implementation can help unify and accelerate research in CL. You can help us grow the tree by visiting www.github.com/lebrice/Sequoia.
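The inheritance idea maps naturally onto class hierarchies: settings are classes, more restrictive settings subclass more general ones, and a method developed for a parent setting automatically applies to all of its children. The class names below are illustrative, not Sequoia's actual API:

```python
class Setting:
    """Root: the most general continual-learning assumptions."""

class ContinualSL(Setting):
    """Adds the supervised-learning assumption."""

class TaskIncrementalSL(ContinualSL):
    """Further assumes task labels are available at train and test time."""

def applicable_settings(method_setting):
    """A method built for `method_setting` applies to it and every subclass."""
    subs = set(method_setting.__subclasses__())
    for s in list(subs):
        subs |= applicable_settings(s)
    return subs | {method_setting}

# A method targeting ContinualSL also covers the more restrictive child,
# but not the more general parent Setting.
targets = applicable_settings(ContinualSL)
```

Traversing `__subclasses__()` recursively is one simple way to enumerate every setting a given method is claimed to handle.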
An Advanced Noise Reduction and Edge Enhancement Algorithm
Shih-Chia Huang
Quoc-Viet Hoang
Trung-Hieu Le
Yan-Tsung Peng
Ching-Chun Huang
Cheng Zhang
Benjamin C. M. Fung
Kai-Han Cheng
Sha-Wo Huang
Complementary metal-oxide-semiconductor (CMOS) image sensors can cause noise in images collected or transmitted in unfavorable environments, especially low-illumination scenarios. Numerous approaches have been developed to solve the problem of image noise removal. However, producing natural and high-quality denoised images remains a crucial challenge. To meet this challenge, we introduce a novel approach for image denoising with the following three main contributions. First, we devise a deep image prior-based module that can produce a noise-reduced image as well as a contrast-enhanced denoised one from a noisy input image. Second, the produced images are passed through a proposed image fusion (IF) module based on Laplacian pyramid decomposition to combine them and prevent noise amplification and color shift. Finally, we introduce a progressive refinement (PR) module, which adopts the summed-area tables to take advantage of spatially correlated information for edge and image quality enhancement. Qualitative and quantitative evaluations demonstrate the efficiency, superiority, and robustness of our proposed method.
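The summed-area table the PR module relies on is a classic data structure: after one pass over the image, the sum over any axis-aligned rectangle is available in constant time, which is what makes box-filter-style refinement cheap. A minimal sketch on a toy grayscale image (the refinement logic itself is not reproduced here):

```python
def summed_area_table(img):
    """One-pass integral image with a zero-padded first row and column."""
    h, w = len(img), len(img[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            sat[i + 1][j + 1] = (img[i][j] + sat[i][j + 1]
                                 + sat[i + 1][j] - sat[i][j])
    return sat

def box_sum(sat, r0, c0, r1, c1):
    """Sum of img[r0:r1][c0:c1] (half-open ranges) in O(1)."""
    return sat[r1][c1] - sat[r0][c1] - sat[r1][c0] + sat[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
sat = summed_area_table(img)
```

With `sat` in hand, any local average needed for edge-aware refinement costs four lookups regardless of window size.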
This paper explores the task of Difficulty-Controllable Question Generation (DCQG), which aims at generating questions with required difficulty levels. Previous research on this task mainly defines the difficulty of a question as whether it can be correctly answered by a Question Answering (QA) system, lacking interpretability and controllability. In our work, we redefine question difficulty as the number of inference steps required to answer it and argue that Question Generation (QG) systems should have stronger control over the logic of generated questions. To this end, we propose a novel framework that progressively increases question difficulty through step-by-step rewriting under the guidance of an extracted reasoning chain. A dataset is automatically constructed to facilitate the research, on which extensive experiments are conducted to test the performance of our method.
2021-07-31
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (published)
With the need of fast retrieval speed and small memory footprint, document hashing has been playing a crucial role in large-scale information retrieval. To generate high-quality hashing code, both semantics and neighborhood information are crucial. However, most existing methods leverage only one of them or simply combine them via some intuitive criteria, lacking a theoretical principle to guide the integration process. In this paper, we encode the neighborhood information with a graph-induced Gaussian distribution, and propose to integrate the two types of information with a graph-driven generative model. To deal with the complicated correlations among documents, we further propose a tree-structured approximation method for learning. Under the approximation, we prove that the training objective can be decomposed into terms involving only singleton or pairwise documents, enabling the model to be trained as efficiently as uncorrelated ones. Extensive experimental results on three benchmark datasets show that our method achieves superior performance over state-of-the-art methods, demonstrating the effectiveness of the proposed model for simultaneously preserving semantic and neighborhood information.
2021-07-31
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (published)