Publications
Prioritization of patients' access to outpatient augmentative and alternative communication services in Quebec: a decision tool
Abstract
Purpose: A large number of people living with a chronic disability wait a long time to access publicly funded rehabilitation services such as Augmentative and Alternative Communication (AAC) services, and there is no standardized tool to prioritize these patients. We aimed to develop a prioritization tool to improve the organization of, and access to, care for this population.
Methods: In this sequential mixed methods study, we began with a qualitative phase in which we conducted semi-structured interviews with 14 stakeholders, including patients, their caregivers, and AAC service providers in Quebec City, Canada, to gather their ideas about prioritization criteria. Then, during a half-day consensus group meeting with stakeholders, using a consensus-seeking technique (i.e., the Technique for Research of Information by Animation of a Group of Experts), we reached consensus on the most important prioritization criteria. These criteria informed the quantitative phase, in which we used an electronic questionnaire to collect stakeholders' views regarding the relative weights for each of the selected criteria. We analyzed these data using a hybrid quantitative method, group-based fuzzy analytical hierarchy process, to obtain the importance weights of the selected eight criteria.
Results: Analyses of the interviews revealed 48 criteria. Collectively, the stakeholders reached consensus on eight criteria, and through the electronic questionnaire they defined the selected criteria's importance weights. The eight prioritization criteria and their importance weights are: person's safety (weight: 0.274), risk development potential (weight: 0.144), psychological well-being (weight: 0.140), physical well-being (weight: 0.124), life prognosis (weight: 0.106), possible impact on social environment (weight: 0.085), interpersonal relationships (weight: 0.073), and responsibilities and social role (weight: 0.054).
Conclusion: In this study, we co-developed a prioritization decision tool with the key stakeholders for the prioritization of patients who are referred to AAC services in rehabilitation settings.
Implications for Rehabilitation: Studies have shown that people in Canada with a need for rehabilitation services are not receiving publicly available services in a timely manner. There is no standardized tool for the prioritization of AAC patients. In this mixed methods study, we co-developed a prioritization tool with key stakeholders for prioritization of patients who are referred to AAC services in a rehabilitation center in Quebec, Canada.
2020-06-05
Disability and Rehabilitation: Assistive Technology (published)
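The abstract above reports eight criteria whose importance weights sum to 1.0, but it does not spell out how those weights are combined with a clinician's ratings of a referral. A common way to use such weights is a simple weighted-sum score; the sketch below illustrates only that idea. The 0-to-10 rating scale, the criterion keys, and the `priority_score` function are illustrative assumptions, not the published decision tool.

```python
# Illustrative sketch only: the published tool's scoring procedure is not
# reproduced here. Weights are the importance weights reported in the
# abstract; combining them as a weighted sum of 0-10 ratings is an assumption.

CRITERIA_WEIGHTS = {
    "person_safety": 0.274,
    "risk_development_potential": 0.144,
    "psychological_well_being": 0.140,
    "physical_well_being": 0.124,
    "life_prognosis": 0.106,
    "impact_on_social_environment": 0.085,
    "interpersonal_relationships": 0.073,
    "responsibilities_and_social_role": 0.054,
}


def priority_score(ratings: dict) -> float:
    """Weighted-sum priority score for one referred patient.

    `ratings` maps each criterion to a hypothetical 0-10 severity/urgency
    rating. Higher scores suggest higher priority in the waiting list.
    """
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)


# Example: a referral rated high on safety and psychological well-being.
example = {
    "person_safety": 9,
    "risk_development_potential": 6,
    "psychological_well_being": 8,
    "physical_well_being": 5,
    "life_prognosis": 4,
    "impact_on_social_environment": 3,
    "interpersonal_relationships": 5,
    "responsibilities_and_social_role": 2,
}
print(f"priority score: {priority_score(example):.2f}")
```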
We present a reduction from reinforcement learning (RL) to no-regret online learning based on the saddle-point formulation of RL, by which "any" online algorithm with sublinear regret can generate policies with provable performance guarantees. This new perspective decouples the RL problem into two parts: regret minimization and function approximation. The first part admits a standard online-learning analysis, and the second part can be quantified independently of the learning algorithm. Therefore, the proposed reduction can be used as a tool to systematically design new RL algorithms. We demonstrate this idea by devising a simple RL algorithm based on mirror descent and the generative-model oracle. For any …
2020-06-03
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (published)
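The abstract mentions an RL algorithm built from mirror descent and a generative-model oracle. The sketch below shows only the generic mirror-descent ingredient: an exponentiated-gradient update (negative-entropy mirror map) of a per-state action distribution given estimated action values. The value estimates, step size, and loop are placeholders, not the paper's actual construction or guarantees.

```python
import numpy as np


def mirror_descent_policy_update(policy: np.ndarray,
                                 q_estimates: np.ndarray,
                                 step_size: float) -> np.ndarray:
    """One exponentiated-gradient step (negative-entropy mirror map) on the
    action simplex for a single state.

    policy:      current action probabilities, shape (num_actions,)
    q_estimates: estimated action values for that state, e.g. from a
                 generative-model oracle (hypothetical here)
    Returns the updated, renormalized action distribution.
    """
    logits = np.log(policy + 1e-12) + step_size * q_estimates
    logits -= logits.max()                # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()


# Toy usage on a 3-action state with made-up value estimates.
pi = np.ones(3) / 3
q = np.array([0.1, 0.5, 0.2])
for _ in range(10):
    pi = mirror_descent_policy_update(pi, q, step_size=1.0)
print(pi)   # probability mass shifts toward the highest-value action
```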
Recent advances in variational inference enable the modelling of highly structured joint distributions, but are limited in their capacity to scale to the high-dimensional setting of stochastic neural networks. This limitation motivates a need for scalable parameterizations of the noise generation process, in a manner that adequately captures the dependencies among the various parameters. In this work, we address this need and present the Kronecker Flow, a generalization of the Kronecker product to invertible mappings designed for stochastic neural networks. We apply our method to variational Bayesian neural networks on predictive tasks, PAC-Bayes generalization bound estimation, and approximate Thompson sampling in contextual bandits. In all setups, our methods prove to be competitive with existing methods and better than the baselines.
2020-06-03
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (published)
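The Kronecker Flow itself composes invertible mappings beyond the linear case; the sketch below only illustrates the underlying scalability trick that Kronecker structure provides for an invertible linear map over a weight matrix. The function name, shapes, and the dense-Jacobian check are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np


def kronecker_linear_flow(w: np.ndarray, a: np.ndarray, b: np.ndarray):
    """Invertible linear map z = A @ w @ B.T on an n-by-m weight matrix.

    This is equivalent to multiplying the flattened weights by the Kronecker
    product of A and B, so the nm-by-nm Jacobian never has to be formed: its
    log-|det| is m * log|det A| + n * log|det B|.
    """
    n, m = w.shape
    z = a @ w @ b.T
    _, logdet_a = np.linalg.slogdet(a)
    _, logdet_b = np.linalg.slogdet(b)
    log_det_jacobian = m * logdet_a + n * logdet_b
    return z, log_det_jacobian


# Toy check against the dense Jacobian (feasible only for tiny n, m).
rng = np.random.default_rng(0)
n, m = 3, 2
a, b = rng.normal(size=(n, n)), rng.normal(size=(m, m))
w = rng.normal(size=(n, m))
z, ld = kronecker_linear_flow(w, a, b)
dense = np.kron(a, b)   # Jacobian of w.flatten() -> z.flatten() (row-major)
assert np.isclose(ld, np.linalg.slogdet(dense)[1])
```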
Abstraction can improve the sample efficiency of reinforcement learning. However, the process of abstraction inherently discards information, potentially compromising an agent's ability to represent high-value policies. To mitigate this, we here introduce combinations of state abstractions and options that are guaranteed to preserve the representation of near-optimal policies. We first define φ-relative options, a general formalism for analyzing the value loss of options paired with a state abstraction, and present necessary and sufficient conditions for φ-relative options to preserve near-optimal behavior in any finite Markov Decision Process. We further show that, under appropriate assumptions, φ-relative options can be composed to induce hierarchical abstractions that are also guaranteed to represent high-value policies.
2020-06-03
International Conference on Artificial Intelligence and Statistics (published)
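The paper gives the formal definition of φ-relative options; the sketch below is only an informal illustration of the idea that an option's initiation set, policy, and termination condition are expressed through a state abstraction φ, so that ground states sharing an abstract state are treated identically. The class, the toy two-room abstraction, and all names are assumptions for illustration, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set


@dataclass
class PhiRelativeOption:
    """Toy option whose components are defined over abstract states."""
    initiation: Set[int]        # abstract states where the option may start
    policy: Dict[int, str]      # abstract state -> action
    terminal: Set[int]          # abstract states where the option terminates

    def available(self, s: int, phi: Callable[[int], int]) -> bool:
        return phi(s) in self.initiation

    def act(self, s: int, phi: Callable[[int], int]) -> str:
        return self.policy[phi(s)]

    def terminates(self, s: int, phi: Callable[[int], int]) -> bool:
        return phi(s) in self.terminal


# Hypothetical abstraction: group 10 ground states into 2 "rooms".
phi = lambda s: 0 if s < 5 else 1
go_to_room_1 = PhiRelativeOption(initiation={0}, policy={0: "right"}, terminal={1})

s = 3
assert go_to_room_1.available(s, phi)
print(go_to_room_1.act(s, phi))   # "right" for every ground state in room 0
```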