TRAIL: Responsible AI for Professionals and Leaders
Learn to integrate responsible AI practices into your organization with the TRAIL program. Register for the next cohort, which begins on April 15.
AI Advantage: Productivity in the Public Service
Learn to leverage generative AI to support and improve your productivity at work. The next cohort will take place online on April 28 and 30, 2026.
Multi-hybrid architectures are poised to take over language modeling due to better quality and performance. We introduce a hierarchical decomposition framework for linear recurrences that allows us to develop algorithms aligned with GPU memory hierarchies, yielding Sliding Window Recurrences (SWR). We focus specifically on truncating recurrences to hardware-aligned windows, which are naturally jagged, limiting costly inter-warp communication. Using SWR, we develop Phalanx layers that serve as drop-in replacements for windowed attention or linear recurrences. In 1B-parameter multi-hybrid models, Phalanx achieves a 10-40% speedup over optimized Transformers at 4K to 32K context lengths while matching perplexity.
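As a minimal illustration of the windowing idea (a toy analogue, not the paper's GPU kernel), a linear recurrence can be truncated to fixed windows by resetting its carried state at each window boundary, so no information flows between windows:

```python
import numpy as np

def windowed_linear_recurrence(a, x, window):
    """Evaluate h[t] = a*h[t-1] + x[t], restarting the state at each
    window boundary so no information crosses windows (a toy analogue
    of truncating recurrences to hardware-aligned windows)."""
    h = np.zeros_like(x)
    state = 0.0
    for t in range(len(x)):
        if t % window == 0:      # window boundary: reset the carried state
            state = 0.0
        state = a * state + x[t]
        h[t] = state
    return h

# With a window of 4, h[4] depends only on x[4], not on x[0..3].
out = windowed_linear_recurrence(0.5, np.ones(8), window=4)
```

Because the windows are independent, each one can be processed by a separate warp or thread block with no cross-window communication, which is the property the abstract attributes to hardware-aligned truncation.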
Decoding fine-grained movement from non-invasive surface Electromyography (sEMG) is a challenge for prosthetic control due to signal non-stationarity and low signal-to-noise ratios. Generic self-supervised learning (SSL) frameworks often yield suboptimal results on sEMG as they attempt to reconstruct noisy raw signals and lack the inductive bias to model the cylindrical topology of electrode arrays. To overcome these limitations, we introduce SPECTRE, a domain-specific SSL framework. SPECTRE features two primary contributions: a physiologically-grounded pre-training task and a novel positional encoding. The pre-training involves masked prediction of discrete pseudo-labels from clustered Short-Time Fourier Transform (STFT) representations, compelling the model to learn robust, physiologically relevant frequency patterns. Additionally, our Cylindrical Rotary Position Embedding (CyRoPE) factorizes embeddings along linear temporal and annular spatial dimensions, explicitly modeling the forearm sensor topology to capture muscle synergies. Evaluations on multiple datasets, including challenging data from individuals with amputation, demonstrate that SPECTRE establishes a new state-of-the-art for movement decoding, significantly outperforming both supervised baselines and generic SSL approaches. Ablation studies validate the critical roles of both spectral pre-training and CyRoPE. SPECTRE provides a robust foundation for practical myoelectric interfaces capable of handling real-world sEMG complexities.
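A rough sketch of the factorized positional-encoding idea (the paper's exact CyRoPE formulation may differ): rotary-style rotations can use a linear angle for the temporal axis and a periodic angle for the electrode ring, so positions that wrap around the cylinder receive identical rotations:

```python
import numpy as np

def cyrope(x, t, s, n_channels):
    """Toy cylindrical rotary embedding: rotate the first half of the
    feature pairs by a linear temporal angle and the second half by an
    annular spatial angle that wraps every `n_channels` electrodes.
    Illustrative only -- the published CyRoPE details may differ."""
    d = x.shape[-1]
    half = d // 2
    out = x.copy()
    # Temporal part: RoPE-style angles theta_i = t / 10000^(i/half).
    for i in range(0, half, 2):
        ang = t / (10000 ** (i / half))
        c, s_ = np.cos(ang), np.sin(ang)
        out[i], out[i + 1] = c * x[i] - s_ * x[i + 1], s_ * x[i] + c * x[i + 1]
    # Spatial part: angle proportional to ring position, periodic in n_channels.
    for i in range(half, d, 2):
        ang = 2 * np.pi * s / n_channels * ((i - half) // 2 + 1)
        c, s_ = np.cos(ang), np.sin(ang)
        out[i], out[i + 1] = c * x[i] - s_ * x[i + 1], s_ * x[i] + c * x[i + 1]
    return out

x = np.ones(8)
# Electrode 0 and electrode n_channels sit at the same ring position,
# so they receive the same spatial rotation.
a = cyrope(x, t=3, s=0, n_channels=16)
b = cyrope(x, t=3, s=16, n_channels=16)
```

The periodicity in the spatial angle is what encodes the annular electrode topology: distance around the ring matters, not raw electrode index.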
Developers are widely using AI code-generation models, aiming to increase productivity and efficiency. However, there are also quality concerns regarding the AI-generated code. The generated code is produced by models trained on publicly available code, which is known to contain bugs and quality issues. Those issues can cause trust and maintenance challenges during the development process. Several quality issues associated with AI-generated code have been reported, including bugs and defects. However, these findings are often scattered and lack a systematic summary. A comprehensive review is currently lacking to reveal the types and distribution of these errors, possible remediation strategies, as well as their correlation with the specific models. In this paper, we systematically analyze the existing AI-generated code literature to establish an overall understanding of bugs and defects in generated code, providing a reference for future model improvement and quality assessment. We aim to understand the nature and extent of bugs in AI-generated code, and provide a classification of bug types and patterns present in code generated by different models. We also discuss possible fixes and mitigation strategies adopted to eliminate bugs from the generated code.
To determine the optimal locations for electric vehicle charging stations, optimization models need to predict which charging stations users will select. We estimate discrete choice models to predict the usage of charging stations using only readily available information for charging network operators. Our parameter values are estimated from a unique, revealed preferences dataset of charging sessions in Montreal, Quebec. We find that user distance to stations, proximity to home areas, and the number of outlets at each station are significant factors for predicting station usage. Additionally, amenities near charging stations have a neutral effect overall, with some users demonstrating strong preference or aversion for these locations. High variability among user preferences highlights the importance of models that incorporate panel effects. Moreover, integrating mixed logit models within the optimization of charging station network design yields high-quality solutions, even when evaluated under other model specifications.
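As a minimal sketch of the underlying discrete choice machinery (with illustrative coefficients, not the paper's estimates), a multinomial logit model assigns each station a linear utility over its attributes and takes a softmax over the available stations:

```python
import numpy as np

def logit_choice_probs(distance_km, n_outlets, beta_dist=-0.8, beta_out=0.3):
    """Multinomial logit: each station's utility is a linear function of
    its attributes; choice probabilities are the softmax of utilities.
    The coefficients here are illustrative, not estimated values."""
    v = beta_dist * np.asarray(distance_km) + beta_out * np.asarray(n_outlets)
    e = np.exp(v - v.max())          # subtract max for numerical stability
    return e / e.sum()

# Three candidate stations: the nearby one wins, all else being equal.
p = logit_choice_probs(distance_km=[0.5, 2.0, 5.0], n_outlets=[2, 4, 2])
```

A mixed logit, as used in the paper, would additionally draw the coefficients from a population distribution per user, which is how panel effects and preference heterogeneity enter the model.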
2025-11-30
Transportation Research Part D: Transport and Environment (published)
Dexterous manipulation is challenging because it requires understanding how subtle hand motion influences the environment through contact with objects. We introduce DexWM, a Dexterous Manipulation World Model that predicts the next latent state of the environment conditioned on past states and dexterous actions. To overcome the scarcity of dexterous manipulation datasets, DexWM is trained on over 900 hours of human and non-dexterous robot videos. To enable fine-grained dexterity, we find that predicting visual features alone is insufficient; therefore, we introduce an auxiliary hand consistency loss that enforces accurate hand configurations. DexWM outperforms prior world models conditioned on text, navigation, and full-body actions, achieving more accurate predictions of future states. DexWM also demonstrates strong zero-shot generalization to unseen manipulation skills when deployed on a Franka Panda arm equipped with an Allegro gripper, outperforming Diffusion Policy by over 50% on average in grasping, placing, and reaching tasks.
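The combined objective described above can be sketched in toy form (hypothetical mean-squared-error terms and weighting; the paper's actual losses may differ): a latent-feature prediction loss plus an auxiliary hand-consistency term:

```python
import numpy as np

def world_model_loss(pred_feat, target_feat, pred_hand, target_hand, lam=1.0):
    """Toy combined objective: latent-feature prediction error plus a
    weighted auxiliary hand-consistency term. Illustrative only."""
    feat_loss = np.mean((pred_feat - target_feat) ** 2)
    hand_loss = np.mean((pred_hand - target_hand) ** 2)
    return feat_loss + lam * hand_loss

feat = np.array([0.2, -0.1, 0.4])
hand = np.array([0.1, 0.3])
# Perfect predictions give zero loss; a hand error alone is still penalized.
zero = world_model_loss(feat, feat, hand, hand)
hand_only = world_model_loss(feat, feat, hand + 0.1, hand, lam=1.0)
```

The auxiliary term is what makes errors in the predicted hand configuration contribute to the gradient even when the visual features are already well predicted.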
Cognitive deficits are common across many neurodevelopmental and psychiatric conditions, including those studied in the current set of PGC-CNV papers. How changes in regional gene expression across the cerebral cortex influence cognitive ability remains unknown. Population variation in gene dosage—which significantly impacts gene expression—represents a unique paradigm to address this question. We developed a cerebral-cortex gene-set burden analysis (CC-GSBA) to associate a trait with genomic deletions and duplications that disrupt genes with similar expression profiles across 180 cortical regions. We performed CC-GSBA across 180 cortical regions to test associations with cognitive ability in 260,000 individuals from general population cohorts. Most cortical gene sets were associated with a decrease in cognitive ability when deleted or duplicated, and this novel approach revealed opposing cortical patterns for the effect sizes of deletions and duplications. These cortical patterns of effect sizes followed the cortical gradient previously characterized at the molecular, cellular, and functional levels. We show that genes with preferential expression in sensorimotor regions demonstrated the largest effect on cognition when deleted. At the opposing end of the cortical gradient, genes with preferential expression in multimodal association regions affected cognition the most when duplicated. These two gene dosage cortical patterns could not be explained by particular cell types, developmental epochs, or genetic constraints, highlighting the fact that the macroscopic network organization of the cerebral cortex is key to understanding the effects of gene dosage on cognitive traits.
Enhancing decision-making in glioblastoma surgery through an explainable human-AI collaboration: an international multicenter model development and external validation study
Surgical resection improves survival in glioblastoma, yet predicting the extent of resection (EOR) remains highly challenging. We developed and externally validated an explainable AI model to generate personalized EOR estimates in 811 glioblastoma patients undergoing microsurgical resection. EOR was categorized into gross-total (GTR), near-total (NTR), and subtotal resections (STR). An interpretable framework provided model explanations and sensitivity analyses to assess the model’s strengths and limitations. To demonstrate clinical impact, we compared the performance of the human expert (gold standard) with our AI model and a combined human-AI approach. External validation confirmed generalizability (AUC 0.78, CI 0.73-0.82). Class-specific AUCs were 0.75 (0.67-0.82) for GTR, 0.59 (0.50-0.69) for NTR, and 0.69 (0.53-0.85) for STR. Key predictors included KPS and NANO scores, age, tumor volume, and unfavorable anatomical locations. A combined human-AI collaboration outperformed human experts, with higher overall accuracies (0.53 to 0.94), F1 scores (0.30 to 0.92), and Cohen’s κ (0.41 to 0.84). Enhancing predictive performance through the clinician-AI collaboration, our explainable model supports preoperative planning and highlights the value of integrating machine intelligence into surgical decision-making.
The frequency dependence of backscattered radiofrequency (RF) signals produced by ultrasound scanners carries rich information related to the tissue microstructure (i.e., scatterer size, attenuation). This information can be used to classify tissues based on microstructural changes associated with disease onset and progression. Conventional convolutional neural networks (CNNs) can learn this information directly from RF data, but they often struggle to achieve adequate frequency selectivity. This increases model complexity and convergence time, and limits generalization. To overcome these challenges, SincNet, originally developed for speech processing, was adapted to classify RF data based on differences in frequency properties. Rather than learning every filter coefficient, SincNet only learns each filter's low frequency and bandwidth, dramatically reducing the number of parameters and improving frequency resolution. For model interpretability, a Gradient-Weighted Filter Contribution is introduced, which highlights the importance of spectral bands. The approach was validated on three datasets: simulated data with different scatterer sizes, experimental phantom data, and in vivo data from rats fed a methionine- and choline-deficient diet to develop liver steatosis, inflammation, and fibrosis. The modified SincNet consistently achieved the best results in material/tissue classification.
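The SincNet parameterization can be sketched as follows: each band-pass kernel is the difference of two windowed sinc low-pass filters, so only the low cutoff and bandwidth are learned per filter (a simplified sketch; the original SincNet also constrains and normalizes the cutoffs during training):

```python
import numpy as np

def sinc_bandpass(f_low, bandwidth, length=65, fs=1.0):
    """Band-pass FIR kernel parameterized only by its low cutoff and
    bandwidth, in the spirit of SincNet: the kernel is the difference
    of two windowed ideal (sinc) low-pass filters. Frequencies are
    expressed as fractions of the sampling rate fs."""
    f_high = f_low + bandwidth
    t = np.arange(length) - (length - 1) / 2   # symmetric time axis
    lp = lambda fc: 2 * fc * np.sinc(2 * fc * t / fs)  # ideal low-pass
    h = (lp(f_high) - lp(f_low)) * np.hamming(length)  # window to reduce ripple
    return h

# Only two learnable scalars per filter instead of `length` coefficients.
h = sinc_bandpass(f_low=0.05, bandwidth=0.15)
```

Because the kernel shape is fixed analytically, the learned parameters map directly to a pass band, which is also what makes the filters easy to interpret spectrally.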
2025-11-26
IEEE Transactions on Biomedical Engineering (published)
Trust is foundational to patient-physician relationships and is associated with improved care-seeking and adherence in primary care. However, validated trust instruments for pediatric emergency and surgical contexts are lacking, and traditional instrument development is slow and resource-intensive. Large language models (LLMs) could streamline the validation process by serving as scalable, systematic expert panel surrogates.
We developed four new trust assessment instruments: two for patient-families and two for physicians. Two-phase content validation was conducted using two parallel synthetic and human expert panels. Synthetic panels consisted of three persona-prompted LLMs (Claude Sonnet 4, GPT-5, Grok 4). Human panels served as traditional comparators. Scale-Content Validity Index (S-CVI) and Fleiss’ kappa (κ) acceptance thresholds were set at ≥0.80.
Combined human–synthetic expert panels revealed substantial inter-rater reliability across all instruments. Fleiss’ κ values for dimensional validation were: patient-family = 0.84 (95% CI [0.72, 0.96]), physician = 0.87 (95% CI [0.72, 1.00]); contextual validation: patient-family = 0.83 (95% CI [0.73, 0.93]), physician = 0.88 (95% CI [0.80, 0.96]). All instruments exceeded the S-CVI ≥0.80 threshold across both validation phases.
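For reference, Fleiss' kappa for a fixed number of raters per subject can be computed with the standard formula (independent of this study's specific panels):

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a (subjects x categories) count matrix, where
    ratings[i, j] is how many raters assigned subject i to category j.
    Each subject must be rated by the same number of raters."""
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]                         # raters per subject
    p_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()                                 # observed agreement
    p_e = np.square(ratings.sum(axis=0) / ratings.sum()).sum()  # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement among 3 raters on 4 items yields kappa = 1.
k = fleiss_kappa([[3, 0], [3, 0], [0, 3], [0, 3]])
```

Unlike Cohen's kappa, this statistic handles more than two raters, which is why it suits a multi-member expert (or LLM) panel.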
Persona-prompted LLMs demonstrated comparable validity outcomes to human experts while accelerating validation timelines from months to weeks. Future research needs to evaluate this approach across psychometric testing phases.
This synthetic instrument validation methodology offers a scalable blueprint for healthcare measurement development, enabling faster creation of validated tools to support evidence-based patient care.