
Hsiu-Chin Lin

Associate Academic Member
Assistant Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Reinforcement Learning
Deep Learning
Climate Change
Out-of-Distribution (OOD) Detection
Autonomous Robot Navigation
Robotics

Biography

Hsiu-Chin Lin is an Assistant Professor in the School of Computer Science and the Department of Electrical and Computer Engineering at McGill University. Her research focuses on model-based motion control, optimization, and machine learning for motion planning. She is particularly interested in adapting robot motion to dynamic environments for manipulators and quadruped robots. Before joining McGill, she was a research associate at the University of Edinburgh and the University of Birmingham. She received her PhD from the University of Edinburgh for her work on robot learning.

Current Students

PhD - McGill
Co-supervisor:
Master's Research - McGill
Principal supervisor:
Master's Research - McGill
Principal supervisor:
Master's Research - McGill

Publications

Globally Stable Neural Imitation Policies
Amin Abyaneh
Mariana Sosa Guzmán
Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation
In order for autonomous mobile robots to navigate in human spaces, they must abide by our social norms. Reinforcement learning (RL) has emerged as an effective method to train robot sequential decision-making policies that are able to respect these norms. However, a large portion of existing work in the field conducts both RL training and testing in simplistic environments. This limits the generalization potential of these models to unseen environments, and undermines the meaningfulness of their reported results. We propose a method to improve the generalization performance of RL social navigation methods using curriculum learning. By employing multiple environment types and by modeling pedestrians using multiple dynamics models, we are able to progressively diversify and escalate difficulty in training. Our results show that curriculum learning in training can be used to achieve better generalization performance than previous training methods. We also show that many existing state-of-the-art RL social navigation works do not evaluate their methods outside of their training environments, and their reported results thus do not reflect their policies' failure to adequately generalize to out-of-distribution scenarios. In response, we validate our training approach on larger and more crowded testing environments than those used in training, allowing for more meaningful measurements of model performance.
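A minimal sketch of the curriculum idea described in this abstract, assuming hypothetical environment builders and a generic policy-update routine (the stage definitions, make_env, and update_policy names are illustrative, not the paper's code):

```python
import random

# Hypothetical curriculum stages: each stage widens the set of layouts and
# pedestrian dynamics models and raises crowd density (values are made up).
CURRICULUM = [
    {"layouts": ["open_space"], "pedestrian_models": ["linear"], "num_pedestrians": 2},
    {"layouts": ["open_space", "corridor"], "pedestrian_models": ["linear", "social_force"], "num_pedestrians": 5},
    {"layouts": ["open_space", "corridor", "crossing"], "pedestrian_models": ["linear", "social_force", "orca"], "num_pedestrians": 10},
]

def make_env(layout, pedestrian_model, num_pedestrians):
    """Placeholder environment factory; a real setup would return a gym-style env."""
    return {"layout": layout, "pedestrian_model": pedestrian_model, "num_pedestrians": num_pedestrians}

def update_policy(policy, env):
    """Placeholder for one RL update (e.g., a PPO step) in the given environment."""
    return policy

def train_with_curriculum(policy, episodes_per_stage=1000):
    # Train on progressively harder, more diverse stages so the policy is not
    # overfit to any single training environment.
    for stage in CURRICULUM:
        for _ in range(episodes_per_stage):
            env = make_env(
                random.choice(stage["layouts"]),
                random.choice(stage["pedestrian_models"]),
                stage["num_pedestrians"],
            )
            policy = update_policy(policy, env)
    return policy
```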
Learning Lyapunov-Stable Polynomial Dynamical Systems Through Imitation
Amin Abyaneh
Imitation learning is a paradigm to address complex motion planning problems by learning a policy to imitate an expert's behavior. However, relying solely on the expert's data might lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided utilizing nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet, these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi stability when replicating complex and highly nonlinear trajectories. To mitigate this problem, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method for various expert trajectories, while remaining stable in the face of perturbations.
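As a rough illustration of the stability condition this abstract refers to (not the authors' implementation), the sketch below models a planar system with polynomial dynamics x_dot = f(x) and a quadratic Lyapunov candidate V(x) = x^T P x, then checks the decrease condition grad V(x)^T f(x) < 0 on sampled states; the coefficients and the candidate form are illustrative assumptions.

```python
import numpy as np

# Illustrative polynomial dynamics f(x) for a 2-D state (coefficients are made
# up, not learned); a globally stable system drives x toward the target x* = 0.
def f(x):
    x1, x2 = x
    return np.array([
        -1.0 * x1 - 0.5 * x1**3,
        -2.0 * x2 + 0.1 * x1 * x2,
    ])

# Quadratic Lyapunov candidate V(x) = x^T P x with P positive definite.
P = np.eye(2)

def V(x):
    return x @ P @ x

def V_dot(x):
    # dV/dt = grad V(x)^T f(x) = 2 x^T P f(x); stability requires this to be
    # negative for all x != 0 (the constraint enforced jointly during learning).
    return 2.0 * x @ P @ f(x)

# Empirically check the decrease condition on random samples around the target.
rng = np.random.default_rng(0)
samples = rng.uniform(-2.0, 2.0, size=(1000, 2))
violations = sum(V_dot(x) >= 0 for x in samples if np.linalg.norm(x) > 1e-6)
print(f"Lyapunov decrease violated at {violations} of {len(samples)} sampled states")
```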
Generating Stable and Collision-Free Policies through Lyapunov Function Learning
Alexandre Coulombe
The need for rapid and reliable robot deployment is on the rise. Imitation Learning (IL) has become popular for producing motion planning policies from a set of demonstrations. However, many methods in IL are not guaranteed to produce stable policies. The generated policy may not converge to the robot target, reducing reliability, and may collide with its environment, reducing the safety of the system. The Stable Estimator of Dynamical Systems (SEDS) produces stable policies by constraining the Lyapunov stability criteria during learning, but its Lyapunov candidate function must be manually selected. In this work, we propose a novel method for learning a Lyapunov function and a collision-free policy using a single neural network model. The method can be equipped with an obstacle avoidance module for convex object pairs to guarantee no collisions. We demonstrate that our method is capable of finding policies in several simulation environments and of transferring to a real-world scenario.
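A minimal, hedged sketch of the general idea of jointly learning a policy and a Lyapunov candidate with a single network, using a soft stability penalty (the obstacle-avoidance module is omitted; the architecture, loss weights, and toy data are illustrative assumptions, not the published model):

```python
import torch
import torch.nn as nn

class PolicyWithLyapunov(nn.Module):
    """One shared network with a policy head (velocity) and a Lyapunov head (V)."""
    def __init__(self, state_dim=2, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                      nn.Linear(hidden, hidden), nn.Tanh())
        self.policy_head = nn.Linear(hidden, state_dim)  # predicted velocity x_dot
        self.v_head = nn.Linear(hidden, 1)               # raw Lyapunov output

    def forward(self, x):
        h = self.backbone(x)
        x_dot = self.policy_head(h)
        v = self.v_head(h).pow(2)  # squared so the candidate is non-negative
        return x_dot, v

model = PolicyWithLyapunov()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy demonstration data: states and stand-in expert velocities toward the origin.
states = torch.randn(256, 2)
expert_vel = -states

for step in range(200):
    x = states.clone().requires_grad_(True)
    x_dot, v = model(x)

    # Imitation loss: match the expert's velocities.
    imitation = ((x_dot - expert_vel) ** 2).mean()

    # Stability penalty: dV/dt = grad V(x)^T x_dot should be negative along the policy.
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    v_dot = (grad_v * x_dot).sum(dim=1)
    stability = torch.relu(v_dot + 1e-3).mean()  # hinge penalty on violations

    loss = imitation + 10.0 * stability
    opt.zero_grad()
    loss.backward()
    opt.step()
```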
Fractal impedance for passive controllers: a framework for interaction robotics
Keyhan Kouhkiloui Babarahmati
Carlo Tiseo
Joshua Smith
M. S. Erden
Michael Nalin Mistry