
Hsiu-Chin Lin

Associate Academic Member
Assistant Professor, McGill University, School of Computer Science
Research Topics
Autonomous Robot Navigation
Climate Change
Deep Learning
Out-of-Distribution (OOD) Detection
Reinforcement Learning
Robotics

Biography

Hsiu-Chin Lin is an assistant professor at the School of Computer Science and in the Department of Electrical and Computer Engineering at McGill University.

Her research spans model-based motion control, optimization, and machine learning for motion planning. She is particularly interested in adapting robot motion to dynamic environments for manipulators and quadruped robots.

Lin was formerly a research associate at the University of Edinburgh and the University of Birmingham. Her PhD research at the University of Edinburgh was on robot learning.

Current Students

PhD - McGill University
PhD - McGill University (Co-supervisor)
Master's Research - McGill University (Principal supervisor)
Master's Research - McGill University (Principal supervisor)
Master's Research - McGill University

Publications

Single-Shot Learning of Stable Dynamical Systems for Long-Horizon Manipulation Tasks
Alexandre St-Aubin
Amin Abyaneh
Mastering complex sequential tasks continues to pose a significant challenge in robotics. While there has been progress in learning long-horizon manipulation tasks, most existing approaches lack rigorous mathematical guarantees for ensuring reliable and successful execution. In this paper, we extend previous work on learning long-horizon tasks and stable policies, focusing on improving task success rates while reducing the amount of training data needed. Our approach introduces a novel method that (1) segments long-horizon demonstrations into discrete steps defined by waypoints and subgoals, and (2) learns globally stable dynamical system policies to guide the robot to each subgoal, even in the face of sensory noise and random disturbances. We validate our approach through both simulation and real-world experiments, demonstrating effective transfer from simulation to physical robotic platforms. Code is available at https://github.com/Alestaubin/stable-imitation-policy-with-waypoints
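The core mechanism can be illustrated with a small sketch. The following is a hedged toy example, not the authors' implementation (that is at the GitHub link above): each subgoal g is tracked by a linear dynamical system x' = -A(x - g) with A positive definite, so the Lyapunov function V(x) = ||x - g||^2 strictly decreases along rollouts and the robot reaches each subgoal in turn, even under injected sensory noise.

import numpy as np

def stable_step(x, goal, A, dt=0.01):
    """One Euler step of the stable dynamical system x' = -A (x - goal)."""
    return x + dt * (-A @ (x - goal))

def rollout_through_subgoals(x, subgoals, A, tol=1e-2, max_steps=10_000):
    """Visit each subgoal in order; convergence holds despite small noise."""
    path = [x.copy()]
    for g in subgoals:
        for _ in range(max_steps):
            x = stable_step(x, g, A)
            x = x + np.random.normal(scale=1e-4, size=x.shape)  # sensory noise
            path.append(x.copy())
            if np.linalg.norm(x - g) < tol:
                break
    return np.array(path)

# A = L L^T + eps*I is positive definite by construction.
L = np.array([[1.0, 0.0], [0.3, 1.0]])
A = L @ L.T + 0.1 * np.eye(2)
waypoints = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]  # hypothetical subgoals
path = rollout_through_subgoals(np.zeros(2), waypoints, A)
print(path[-1])  # ends near the final subgoal despite the injected noise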
Globally Stable Neural Imitation Policies
Amin Abyaneh
Mariana Sosa Guzmán
Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation
In order for autonomous mobile robots to navigate in human spaces, they must abide by our social norms. Reinforcement learning (RL) has emerged as an effective method to train robot sequential decision-making policies that are able to respect these norms. However, a large portion of existing work in the field conducts both RL training and testing in simplistic environments. This limits the generalization potential of these models to unseen environments, and undermines the meaningfulness of their reported results. We propose a method to improve the generalization performance of RL social navigation methods using curriculum learning. By employing multiple environment types and by modeling pedestrians using multiple dynamics models, we are able to progressively diversify and escalate difficulty in training. Our results show that curriculum learning in training can achieve better generalization performance than previous training methods. We also show that many existing state-of-the-art RL social navigation works do not evaluate their methods outside of their training environments, and thus their reported results do not reflect their policies' failure to adequately generalize to out-of-distribution scenarios. In response, we validate our training approach on larger and more crowded testing environments than those used in training, allowing for more meaningful measurements of model performance.
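As a rough illustration of the curriculum idea, the sketch below (environment parameters, dynamics names, and thresholds are hypothetical, not taken from the paper) promotes training to denser crowds and richer pedestrian dynamics models only once the policy's recent success rate on the current stage is high enough.

import random

CURRICULUM = [
    {"n_pedestrians": 2,  "dynamics": "linear"},
    {"n_pedestrians": 6,  "dynamics": "social_force"},
    {"n_pedestrians": 12, "dynamics": "orca"},
]

def train_with_curriculum(train_episode, promote_at=0.8, window=100):
    """`train_episode(cfg) -> bool` runs one episode and reports success."""
    stage, results = 0, []
    while stage < len(CURRICULUM):
        cfg = CURRICULUM[stage]
        results.append(train_episode(cfg))
        results = results[-window:]  # sliding window of recent outcomes
        if len(results) == window and sum(results) / window >= promote_at:
            stage, results = stage + 1, []  # escalate difficulty
    return "curriculum complete"

# Stand-in episode: success gets harder as the crowd grows.
print(train_with_curriculum(
    lambda cfg: random.random() < 1.0 - 0.015 * cfg["n_pedestrians"]))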
Learning Lyapunov-Stable Polynomial Dynamical Systems Through Imitation
Amin Abyaneh
Imitation learning is a paradigm to address complex motion planning problems by learning a policy to imitate an expert's behavior. However, relying solely on the expert's data might lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided utilizing nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet, these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi-stability when replicating complex and highly nonlinear trajectories. To mitigate this problem, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method for various expert trajectories, while remaining stable in the face of perturbations.
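A toy version of the stability-constrained fit can be sketched in one dimension. In this simplified example the Lyapunov candidate is fixed to V(x) = x^2 rather than learned jointly as in the paper, and the demonstration data are synthetic: a polynomial policy x' = f(x) is fitted to expert velocities while a hinge penalty discourages any point where dV/dt = 2 x f(x) fails to be negative, so trajectories converge to the target at 0.

import numpy as np
from scipy.optimize import minimize

x_demo = np.linspace(1.0, 0.05, 40)   # synthetic expert states approaching 0
xdot_demo = -0.8 * x_demo             # synthetic expert velocities

def f(x, c):
    """Polynomial policy f(x) = c0*x + c1*x^2 + c2*x^3."""
    return c[0] * x + c[1] * x**2 + c[2] * x**3

def loss(c, lam=10.0):
    imitation = np.mean((f(x_demo, c) - xdot_demo) ** 2)
    vdot = 2.0 * x_demo * f(x_demo, c)                  # dV/dt along the data
    stability = np.mean(np.maximum(vdot + 1e-3, 0.0))   # hinge penalty
    return imitation + lam * stability

res = minimize(loss, x0=np.zeros(3), method="Nelder-Mead")
print(res.x)  # coefficients of a stability-penalized polynomial policy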
Generating Stable and Collision-Free Policies through Lyapunov Function Learning
Alexandre Coulombe
The need for rapid and reliable robot deployment is on the rise. Imitation Learning (IL) has become popular for producing motion planning policies from a set of demonstrations. However, many methods in IL are not guaranteed to produce stable policies. The generated policy may not converge to the robot target, reducing reliability, and may collide with its environment, reducing the safety of the system. The Stable Estimator of Dynamical Systems (SEDS) produces stable policies by constraining the Lyapunov stability criteria during learning, but its Lyapunov candidate function has to be manually selected. In this work, we propose a novel method for learning a Lyapunov function and a collision-free policy using a single neural network model. The method can be equipped with an obstacle avoidance module for convex object pairs to guarantee no collisions. We demonstrate that our method is capable of finding policies in several simulation environments and that they transfer to a real-world scenario.
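One way a single network can supply both pieces, sketched below with a hypothetical architecture (the paper's exact model and its obstacle-avoidance module are omitted), is to parameterize a positive-definite Lyapunov function V and derive the policy as its negative gradient, so that V is non-increasing along the resulting flow by construction.

import torch

class NeuralLyapunov(torch.nn.Module):
    def __init__(self, dim=2, hidden=64, eps=1e-2):
        super().__init__()
        self.phi = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden))
        self.eps = eps

    def V(self, x, goal):
        # V(x) = ||phi(x) - phi(g)||^2 + eps*||x - g||^2 is zero at the goal
        # and strictly positive elsewhere, so it is a valid Lyapunov candidate.
        return ((self.phi(x) - self.phi(goal)) ** 2).sum(-1) \
             + self.eps * ((x - goal) ** 2).sum(-1)

    def policy(self, x, goal):
        # Following -grad V gives dV/dt = -||grad V||^2 <= 0 along the flow.
        x = x.requires_grad_(True)
        v = self.V(x, goal).sum()
        grad, = torch.autograd.grad(v, x)
        return -grad

model = NeuralLyapunov()
x = torch.tensor([[1.0, -0.5]])
goal = torch.zeros(1, 2)
print(model.policy(x, goal))  # velocity command pointing down the V landscape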
Fractal impedance for passive controllers: a framework for interaction robotics
Keyhan Kouhkiloui Babarahmati
Carlo Tiseo
Joshua Smith
M. S. Erden
Michael Nalin Mistry