Integrating accompanying patients into clinical oncology teams: limiting and facilitating factors
Marie-Pascale Pomey
Jesseca Paquette
Monica Iliescu Nelea
Cécile Vialaron
Rim Mourad
Karine Bouchard
Louise Normandin
Marie-Andrée Côté
Mado Desforges
Pénélope Pomey-Carpentier
Israël Fortin
Isabelle Ganache
Zeev Rosberger
Danielle Charpentier
Marie-France Vachon
Lynda Bélanger
Michel Dorval
Djahanchah Philip Ghadiri
Mélanie Lavoie-Tremblay
Antoine Boivin
Jean-François Pelletier
Nicolas Fernandez
Alain M. Danino
Michèle de Guise
Learning self-supervised molecular representations for drug–drug interaction prediction
Rogia Kpanou
Patrick Dallaire
Elsa Rousseau
PERFUMES: pipeline to extract RNA functional motifs and exposed structures
Arnaud Chol
Roman Sarrazin-Gendron
Éric Lécuyer
Jérôme Waldispühl
Abstract
Motivation: Up to 75% of the human genome encodes RNAs. The function of many non-coding RNAs relies on their ability to fold into 3D structures. Specifically, nucleotides inside secondary structure loops form non-canonical base pairs that help stabilize complex local 3D structures. These RNA 3D motifs can promote specific interactions with other molecules or serve as catalytic sites.
Results: We introduce PERFUMES, a computational pipeline to identify 3D motifs that can be associated with observable features. Given a set of RNA sequences with associated binary experimental measurements, PERFUMES searches for RNA 3D motifs using BayesPairing2 and extracts those that are over-represented in the set of positive sequences. It also conducts a thermodynamics analysis of the structural context that can support the interpretation of the predictions. We illustrate PERFUMES’ usage on the SNRPA protein binding site, for which the tool retrieved both previously known binder motifs and new ones.
Availability and implementation: PERFUMES is an open-source Python package (https://jwgitlab.cs.mcgill.ca/arnaud_chol/perfumes).
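The statistical idea behind the abstract's over-representation step (motifs found more often among positive than negative sequences) can be illustrated with a minimal Fisher's exact test sketch. The function and toy counts below are hypothetical placeholders for illustration only; they are not the PERFUMES API, which is documented in the repository linked above.

```python
# Minimal sketch of a motif over-representation test, assuming motif hits
# have already been obtained (e.g. from a BayesPairing2-style search).
# Names and data are illustrative; this is not the PERFUMES interface.
from scipy.stats import fisher_exact

def overrepresented_motifs(hits_pos, hits_neg, n_pos, n_neg, alpha=0.05):
    """hits_pos / hits_neg: dict mapping motif -> number of sequences containing it."""
    results = {}
    for motif in set(hits_pos) | set(hits_neg):
        a = hits_pos.get(motif, 0)   # positives with the motif
        b = n_pos - a                # positives without it
        c = hits_neg.get(motif, 0)   # negatives with the motif
        d = n_neg - c                # negatives without it
        odds, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        if p < alpha:
            results[motif] = (odds, p)
    return results

# Toy example: one motif enriched among the 40 positive sequences.
print(overrepresented_motifs({"kink-turn": 25}, {"kink-turn": 5}, n_pos=40, n_neg=40))
```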
Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control
Zhongyu Li
Xue Bin Peng
Pieter Abbeel
Sergey Levine
Koushil Sreenath
This paper presents a comprehensive study on using deep reinforcement learning (RL) to create dynamic locomotion controllers for bipedal robots. Going beyond focusing on a single locomotion skill, we develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing. Our RL-based controller incorporates a novel dual-history architecture, utilizing both a long-term and short-term input/output (I/O) history of the robot. This control architecture, when trained through the proposed end-to-end RL approach, consistently outperforms other methods across a diverse range of skills in both simulation and the real world. The study also delves into the adaptivity and robustness introduced by the proposed RL system in developing locomotion controllers. We demonstrate that the proposed architecture can adapt to both time-invariant dynamics shifts and time-variant changes, such as contact events, by effectively using the robot’s I/O history. Additionally, we identify task randomization as another key source of robustness, fostering better task generalization and compliance to disturbances. The resulting control policies can be successfully deployed on Cassie, a torque-controlled human-sized bipedal robot. This work pushes the limits of agility for bipedal robots through extensive real-world experiments. We demonstrate a diverse range of locomotion skills, including: robust standing, versatile walking, fast running with a demonstration of a 400-meter dash, and a diverse set of jumping skills, such as standing long jumps and high jumps.
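As a rough illustration of the dual-history idea described in the abstract, the sketch below fuses a long-term I/O history (encoded here with a small 1D CNN) with a short-term history (flattened into an MLP). The encoder choices, layer sizes, and dimensions are assumptions for illustration, not the paper's actual architecture or training setup.

```python
# Hedged sketch of a dual-history policy: long and short I/O histories are
# encoded separately and concatenated before the action head. Illustrative only.
import torch
import torch.nn as nn

class DualHistoryPolicy(nn.Module):
    def __init__(self, io_dim, long_len=100, short_len=4, act_dim=10):
        super().__init__()
        # Long-term history: (batch, io_dim, long_len) -> compact embedding.
        self.long_enc = nn.Sequential(
            nn.Conv1d(io_dim, 32, kernel_size=6, stride=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        long_feat = self.long_enc(torch.zeros(1, io_dim, long_len)).shape[1]
        # Short-term history is flattened and passed through an MLP.
        self.short_enc = nn.Sequential(nn.Linear(io_dim * short_len, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(long_feat + 64, 128), nn.ReLU(), nn.Linear(128, act_dim)
        )

    def forward(self, long_hist, short_hist):
        z = torch.cat([self.long_enc(long_hist),
                       self.short_enc(short_hist.flatten(1))], dim=-1)
        return self.head(z)

policy = DualHistoryPolicy(io_dim=46)  # io_dim is a made-up placeholder
actions = policy(torch.randn(2, 46, 100), torch.randn(2, 4, 46))
print(actions.shape)  # torch.Size([2, 10])
```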
The Effects of a Digital Game Simulator versus a Traditional Intervention on Paramedics’ Neonatal Resuscitation Performance
Georg M. Schmölzer
Unsupervised Discovery of Steerable Factors When Graph Deep Generative Models Are Entangled
Shengchao Liu
Chengpeng Wang
Jiarui Lu
Weili Nie
Hanchen Wang
Zhuoxinran Li
Bolei Zhou
Asymmetric stimulus representations bias visual perceptual learning
Pooya Laamerad
Asmara Awada
Christopher C. Pack
The primate visual cortex contains various regions that exhibit specialization for different stimulus properties, such as motion, shape, and color. Within each region there is often further specialization, such that particular stimulus features, such as horizontal and vertical orientations, are overrepresented. These asymmetries are associated with well-known perceptual biases, but little is known about how they influence visual learning. Most theories would predict that learning is optimal, in the sense that it is unaffected by these asymmetries. But other approaches to learning would result in specific patterns of perceptual biases. To distinguish between these possibilities, we trained human observers to discriminate between expanding and contracting motion patterns, which have a highly asymmetrical representation in visual cortex. Observers exhibited biased percepts of these stimuli, and these biases were affected by training in ways that were often suboptimal. We simulated different neural network models and found that a learning rule that involved only adjustments to decision criteria, rather than connection weights, could account for our data. These results suggest that cortical asymmetries influence visual perception and that human observers often rely on suboptimal strategies for learning.
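One way to make the criterion-only learning rule mentioned in the abstract concrete is a small signal-detection simulation in which the sensory encoding stays fixed and only the decision boundary is nudged after error feedback. All parameters and the update rule below are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of criterion-only learning: fixed sensory readout, with the
# decision criterion moved toward the misleading evidence after each error.
import numpy as np

rng = np.random.default_rng(0)
d_prime, bias = 1.0, 0.5       # asymmetric representation -> biased evidence
criterion, lr = 0.0, 0.05      # only this criterion is learned
n_trials = 2000
correct = np.zeros(n_trials, dtype=bool)

for t in range(n_trials):
    label = rng.integers(2)                        # 0 = contracting, 1 = expanding
    mean = (d_prime / 2 if label else -d_prime / 2) + bias
    evidence = rng.normal(mean, 1.0)               # fixed sensory encoding
    choice = int(evidence > criterion)
    correct[t] = (choice == label)
    if not correct[t]:
        # Criterion-only update: shift the boundary toward the error evidence.
        criterion += lr * (evidence - criterion)

print(f"early accuracy {correct[:200].mean():.2f}, late accuracy {correct[-200:].mean():.2f}")
print(f"learned criterion {criterion:.2f} (stimulus bias was {bias:.2f})")
```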
A comparison of RL-based and PID controllers for 6-DOF swimming robots: hybrid underwater object tracking
Faraz Lotfi
Khalil Virji
Nicholas Dudek
Protocol for fever control using external cooling in mechanically ventilated patients with septic shock: SEPSISCOOL II randomised controlled trial
Armelle Guénégou-Arnoux
Juliette Murris
Stéphane Bechet
Camille Jung
Johann Auchabie
Julien Dupeyrat
Nadia Anguel
Pierre Asfar
Julio Badie
Dorothée Carpentier
Benjamin Chousterman
Jeremy Bourenne
Agathe Delbove
Jérôme Devaquet
Nicolas Deye
Anne-Florence Dureau
Jean-Baptiste Lascarrou
Stephane Legriel
Christophe Guitton
Caroline Jannière-Nartey
Jean-Pierre Quenot
Jean-Claude Lacherade
Julien Maizel
Armand Mekontso Dessap
Bruno Mourvillier
Philippe Petua
Gaetan Plantefeve
Jean-Christophe Richard
Alexandre Robert
Clément Saccheri
Ly Van Phach Vong
Sandrine Katsahian
Frédérique Schortgen