
Gregory Dudek

Associate Academic Member
Full Professor, McGill University, School of Computer Science
Vice-President of Research and Lab Head, Samsung AI Center in Montréal

Biography

Gregory Dudek is a professor at the Centre for Intelligent Machines (CIM) in the School of Computer Science at McGill University and research director of the university's Mobile Robotics Laboratory. He is also Lab Head and Vice-President of Research at the Samsung AI Center in Montréal.

He has authored or co-authored more than 300 research papers on topics including visual object description and recognition, radio-frequency (RF) localization, robotic navigation and mapping, distributed systems design, 5G telecommunications, and biological perception. With Michael Jenkin, he co-authored the book Computational Principles of Mobile Robotics, published by Cambridge University Press. He has chaired and contributed to numerous national and international conferences and professional activities in robotics, machine sensing, and computer vision. His research interests include perception for mobile robotics, navigation and position estimation, environment and shape modelling, computational vision, and collaborative filtering.

Current Students

PhD - McGill University
Principal supervisor:
Postdoctorate - McGill University
Principal supervisor:

Publications

Working Backwards: Learning to Place by Picking
Oliver Limoyo
Abhisek Konar
Trevor Ablett
Jonathan Kelly
Francois R. Hogan
PEOPLEx: PEdestrian Opportunistic Positioning LEveraging IMU, UWB, BLE and WiFi
Pierre-Yves Lajoie
Bobak H. Baghi
Sachini Herath
Francois R. Hogan
This paper advances the field of pedestrian localization by introducing a unifying framework for opportunistic positioning based on nonlinear factor graph optimization. While many existing approaches assume constant availability of one or multiple sensing signals, our methodology employs IMU-based pedestrian inertial navigation as the backbone for sensor fusion, opportunistically integrating Ultra-Wideband (UWB), Bluetooth Low Energy (BLE), and WiFi signals when they are available in the environment. The proposed PEOPLEx framework is designed to incorporate sensing data as it becomes available, operating without any prior knowledge about the environment (e.g. anchor locations, radio frequency maps, etc.). Our contributions are twofold: 1) we introduce an opportunistic multi-sensor and real-time pedestrian positioning framework fusing the available sensor measurements; 2) we develop novel factors for adaptive scaling and coarse loop closures, significantly improving the precision of indoor positioning. Experimental validation confirms that our approach achieves accurate localization estimates in real indoor scenarios using commercial smartphones.
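As a rough illustration of the opportunistic factor-graph idea described above (not the PEOPLEx implementation; the anchor position, step displacements, and ranges below are hypothetical), the following Python sketch fuses IMU-style odometry with intermittent UWB range measurements by solving a nonlinear least-squares problem:

```python
# Minimal factor-graph-style fusion sketch: IMU odometry factors between
# consecutive positions, plus opportunistic UWB range factors to one anchor.
import numpy as np
from scipy.optimize import least_squares

N = 5                                    # number of 2-D pedestrian positions
odom = np.array([[1.0, 0.0]] * (N - 1))  # IMU-derived step displacements (hypothetical)
anchor = np.array([2.0, 1.0])            # UWB anchor position (assumed known here)
uwb = [(2, 1.5), (4, 2.2)]               # opportunistic ranges: (pose index, distance)

def residuals(flat):
    x = flat.reshape(N, 2)
    res = [x[0]]                                        # prior: first pose at the origin
    for i, d in enumerate(odom):                        # odometry factors
        res.append((x[i + 1] - x[i]) - d)
    for idx, r in uwb:                                  # range factors, only where available
        res.append(np.atleast_1d(np.linalg.norm(x[idx] - anchor) - r))
    return np.concatenate(res)

sol = least_squares(residuals, np.zeros(2 * N))
print(sol.x.reshape(N, 2))                              # smoothed trajectory estimate
```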
A Study of Human-Robot Handover through Human-Human Object Transfer
Charlotte Morissette
Bobak H. Baghi
Francois R. Hogan
In this preliminary study, we investigate changes in handover behaviour when transferring hazardous objects with the help of a high-resolution touch sensor. Participants were asked to hand over a safe and hazardous object (a full cup and an empty cup) while instrumented with a modified STS sensor. Our data shows a clear distinction in the length of handover for the full cup vs the empty one, with the former being slower. Sensor data further suggests a change in tactile behaviour dependent on the object's risk factor. The results of this paper motivate a deeper study of tactile factors which could characterize a risky handover, allowing for safer human-robot interactions in the future.
Generalizable Imitation Learning Through Pre-Trained Representations
Wei-Di Chang
Francois R. Hogan
In this paper we leverage self-supervised vision transformer models and their emergent semantic abilities to improve the generalization abilities of imitation learning policies. We introduce BC-ViT, an imitation learning algorithm that leverages rich DINO pre-trained Visual Transformer (ViT) patch-level embeddings to obtain better generalization when learning through demonstrations. Our learner sees the world by clustering appearance features into semantic concepts, forming stable keypoints that generalize across a wide range of appearance variations and object types. We show that this representation enables generalized behaviour by evaluating imitation learning across a diverse dataset of object manipulation tasks. Our method, data and evaluation approach are made available to facilitate further study of generalization in Imitation Learners.
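The gist of the keypoint idea can be sketched as follows. This is a hypothetical toy, not the released BC-ViT code: the ViT feature extractor is faked with random patch embeddings and the demonstrations are placeholders.

```python
# Cluster pre-trained ViT patch embeddings into a few semantic "keypoints",
# then behaviour-clone actions from the flattened keypoint coordinates.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def patch_features(image):
    # Stand-in for a frozen self-supervised ViT (e.g. DINO): (num_patches, dim).
    return torch.randn(196, 384)

def keypoints_from_patches(feats, k=8, grid_side=14):
    # Group patches into k clusters; each keypoint is the mean patch-grid
    # location of one cluster, so the observation has 2*k values.
    labels = torch.from_numpy(KMeans(n_clusters=k, n_init=10).fit_predict(feats.numpy()))
    ys, xs = torch.meshgrid(torch.arange(grid_side), torch.arange(grid_side), indexing="ij")
    grid = torch.stack([ys.reshape(-1), xs.reshape(-1)], dim=-1).float()
    return torch.stack([grid[labels == c].mean(dim=0) for c in range(k)]).flatten()

policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 7))  # 2*k -> 7-DoF action
demo_obs = torch.stack([keypoints_from_patches(patch_features(None)) for _ in range(8)])
demo_act = torch.randn(8, 7)                               # placeholder demonstrations
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(50):                                        # plain behaviour-cloning loss
    opt.zero_grad()
    nn.functional.mse_loss(policy(demo_obs), demo_act).backward()
    opt.step()
```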
Multimodal and Force-Matched Imitation Learning with a See-Through Visuotactile Sensor
Trevor Ablett
Oliver Limoyo
Adam Sigal
Affan Jilani
Jonathan Kelly
Francois R. Hogan
Kinesthetic Teaching is a popular approach to collecting expert robotic demonstrations of contact-rich tasks for imitation learning (IL), but it typically only measures motion, ignoring the force placed on the environment by the robot. Furthermore, contact-rich tasks require accurate sensing of both reaching and touching, which can be difficult to provide with conventional sensing modalities. We address these challenges with a See-Through-your-Skin (STS) visuotactile sensor, using the sensor both (i) as a measurement tool to improve kinesthetic teaching, and (ii) as a policy input in contact-rich door manipulation tasks. An STS sensor can be switched between visual and tactile modes by leveraging a semi-transparent surface and controllable lighting, allowing for both pre-contact visual sensing and during-contact tactile sensing with a single sensor. First, we propose tactile force matching, a methodology that enables a robot to match forces read during kinesthetic teaching using tactile signals. Second, we develop a policy that controls STS mode switching, allowing a policy to learn the appropriate moment to switch an STS from its visual to its tactile mode. Finally, we study multiple observation configurations to compare and contrast the value of visual and tactile data from an STS with visual data from a wrist-mounted eye-in-hand camera. With over 3,000 test episodes from real-world manipulation experiments, we find that the inclusion of force matching raises average policy success rates by 62.5%, STS mode switching by 30.3%, and STS data as a policy input by 42.5%. Our results highlight the utility of see-through tactile sensing for IL, both for data collection to allow force matching, and for policy execution to allow accurate task feedback.
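A toy control loop conveys the two ideas (force matching and learned mode switching); the helpers and numbers here are hypothetical stand-ins, not the paper's system:

```python
# Sketch: scale the commanded motion so the measured tactile force tracks the
# force recorded during kinesthetic teaching, and let the policy decide when
# to switch the STS sensor from visual to tactile mode.
import numpy as np

def sts_read(mode):
    # Placeholder for an STS observation in the requested mode.
    return np.random.rand(8) if mode == "tactile" else np.random.rand(16)

def policy(obs, mode):
    # Placeholder policy: a small motion command and a mode-switch probability.
    return np.array([0.0, 0.0, 0.01]), (0.9 if obs.mean() > 0.5 else 0.1)

mode, target_force, gain = "visual", 2.0, 0.5        # taught force (N) and P-gain
for step in range(100):
    obs = sts_read(mode)
    action, p_switch = policy(obs, mode)
    if mode == "visual" and p_switch > 0.5:
        mode = "tactile"                              # learned moment to switch modes
    if mode == "tactile":
        measured_force = float(obs.sum())             # crude proxy for contact force
        action *= 1.0 + gain * (target_force - measured_force)   # force matching
    # `action` would be sent to the robot controller here
```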
SAGE: Smart home Agent with Grounded Execution
Dmitriy Rivkin
Francois Robert Hogan
Amal Feriani
Abhisek Konar
Adam Sigal
Steve Liu
Realizing XR Applications Using 5G-Based 3D Holographic Communication and Mobile Edge Computing
Dun Yuan
Ekram Hossain
Di Wu
3D holographic communication has the potential to revolutionize the way people interact with each other in virtual spaces, offering immersive and realistic experiences. However, demands for high data rates, extremely low latency, and high computations to enable this technology pose a significant challenge. To address this challenge, we propose a novel job scheduling algorithm that leverages Mobile Edge Computing (MEC) servers in order to minimize the total latency in 3D holographic communication. One of the motivations for this work is to prevent the uncanny valley effect, which can occur when the latency hinders the seamless and real-time rendering of holographic content, leading to a less convincing and less engaging user experience. Our proposed algorithm dynamically allocates computation tasks to MEC servers, considering the network conditions, computational capabilities of the servers, and the requirements of the 3D holographic communication application. We conduct extensive experiments to evaluate the performance of our algorithm in terms of latency reduction, and the results demonstrate that our approach significantly outperforms other baseline methods. Furthermore, we present a practical scenario involving Augmented Reality (AR), which not only illustrates the applicability of our algorithm but also highlights the importance of minimizing latency in achieving high-quality holographic views. By efficiently distributing the computation workload among MEC servers and reducing the overall latency, our proposed algorithm enhances the user experience in 3D holographic communications and paves the way for the widespread adoption of this technology in various applications, such as telemedicine, remote collaboration, and entertainment.
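For intuition, a greedy baseline of the scheduling problem can be written in a few lines (made-up server and job parameters, not the paper's algorithm): each rendering job goes to the MEC server with the lowest estimated completion latency.

```python
# Estimated latency = upload time + current queueing delay + compute time.
servers = [  # uplink rate (Mbit/s), compute rate (Gcycles/s), queue backlog (s)
    {"rate": 400.0, "cps": 20.0, "queue": 0.00},
    {"rate": 250.0, "cps": 35.0, "queue": 0.05},
]
jobs = [  # payload size (Mbit), compute demand (Gcycles)
    {"size": 80.0, "cycles": 4.0},
    {"size": 20.0, "cycles": 9.0},
    {"size": 50.0, "cycles": 2.5},
]

def latency(job, s):
    return job["size"] / s["rate"] + s["queue"] + job["cycles"] / s["cps"]

total = 0.0
for job in jobs:
    best = min(servers, key=lambda s: latency(job, s))
    t = latency(job, best)
    best["queue"] += t                 # the chosen server becomes busier
    total += t
print(f"total latency: {total:.3f} s")
```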
Adaptive Dynamic Programming for Energy-Efficient Base Station Cell Switching
Junliang Luo
Yi Tian Xu
Di Wu
M. Jenkin
Energy saving in wireless networks is growing in importance due to increasing demand for evolving new-gen cellular networks, environmental and regulatory concerns, and potential energy crises arising from geopolitical tensions. In this work, we propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch on/off the cells of base stations to reduce network power consumption while maintaining adequate Quality of Service (QoS) metrics. We use a multilayer perceptron (MLP) given each state-action pair to predict the power consumption to approximate the value function in ADP for selecting the action with optimal expected power saved. To save the largest possible power consumption without deteriorating QoS, we include another MLP to predict QoS and a long short-term memory (LSTM) for predicting handovers, incorporated into an online optimization algorithm producing an adaptive QoS threshold for filtering cell switching actions based on the overall QoS history. The performance of the method is evaluated using a practical network simulator with various real-world scenarios with dynamic traffic patterns.
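The action-selection step can be illustrated with placeholder predictors standing in for the trained MLP models (everything below is hypothetical, not the paper's implementation):

```python
# Pick the cell on/off configuration with the lowest predicted power among
# those whose predicted QoS clears an adaptive threshold from recent history.
import itertools
import numpy as np

def predict_power(state, action):      # stands in for the power-prediction MLP
    return 10.0 + 2.0 * sum(action) + 0.1 * state.sum()

def predict_qos(state, action):        # stands in for the QoS-prediction MLP
    return 0.6 + 0.1 * sum(action) + 0.01 * state.mean()

state = np.random.rand(4)                                  # recent traffic features
qos_history = [0.92, 0.95, 0.90, 0.93]
threshold = np.mean(qos_history) - np.std(qos_history)     # adaptive QoS threshold

candidates = list(itertools.product([0, 1], repeat=3))     # on/off for 3 cells
feasible = [a for a in candidates if predict_qos(state, a) >= threshold]
best = min(feasible or candidates, key=lambda a: predict_power(state, a))
print("chosen cell configuration:", best)
```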
Imitation Learning from Observation through Optimal Transport
Wei-Di Chang
Scott Fujimoto
A Generic Framework for Byzantine-Tolerant Consensus Achievement in Robot Swarms
Hanqing Zhao
Alexandre Pacheco
Volker Strobel
Andreagiovanni Reina
Marco Dorigo
Recent studies show that some security features that blockchains grant to decentralized networks on the internet can be ported to swarm robotics. Although the integration of blockchain technology and swarm robotics shows great promise, thus far, research has been limited to proof-of-concept scenarios where the blockchain-based mechanisms are tailored to a particular swarm task and operating environment. In this study, we propose a generic framework based on a blockchain smart contract that enables robot swarms to achieve secure consensus in an arbitrary observation space. This means that our framework can be customized to fit different swarm robotics missions, while providing methods to identify and neutralize Byzantine robots, that is, robots which exhibit detrimental behaviours stemming from faults or malicious tampering.
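The consensus-plus-neutralization loop can be mimicked in a toy simulation (plain Python rather than a smart contract; all numbers are invented):

```python
# Robots report noisy estimates of a shared quantity; the consensus value is a
# robust aggregate, and robots that repeatedly deviate from it are blacklisted.
import numpy as np

rng = np.random.default_rng(0)
true_value, n_honest, n_byzantine = 5.0, 8, 2
blacklist, strikes, tol = set(), {}, 1.0

for rnd in range(10):
    reports = {i: true_value + rng.normal(0, 0.2) for i in range(n_honest)}
    for i in range(n_honest, n_honest + n_byzantine):
        reports[i] = rng.uniform(-10, 10)                  # arbitrary Byzantine values
    active = {i: v for i, v in reports.items() if i not in blacklist}
    consensus = float(np.median(list(active.values())))    # robust aggregation
    for i, v in active.items():                            # identify and neutralize outliers
        strikes[i] = strikes.get(i, 0) + (abs(v - consensus) > tol)
        if strikes[i] >= 3:
            blacklist.add(i)

print("consensus:", round(consensus, 2), "blacklisted robots:", sorted(blacklist))
```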
Uncertainty-aware hybrid paradigm of nonlinear MPC and model-based RL for offroad navigation: Exploration of transformers in the predictive model
Faraz Lotfi
Khalil Virji
Farnoosh Faraji
Lucas Berry
Andrew Holliday
In this paper, we investigate a hybrid scheme that combines nonlinear model predictive control (MPC) and model-based reinforcement learning (RL) for navigation planning of an autonomous model car across offroad, unstructured terrains without relying on predefined maps. Our innovative approach takes inspiration from BADGR, an LSTM-based network that primarily concentrates on environment modeling, but distinguishes itself by substituting LSTM modules with transformers to greatly elevate the performance of our model. Addressing uncertainty within the system, we train an ensemble of predictive models and estimate the mutual information between model weights and outputs, facilitating dynamic horizon planning through the introduction of variable speeds. Further enhancing our methodology, we incorporate a nonlinear MPC controller that accounts for the intricacies of the vehicle's model and states. The model-based RL facet produces steering angles and quantifies inherent uncertainty. At the same time, the nonlinear MPC suggests optimal throttle settings, striking a balance between goal attainment speed and managing model uncertainty influenced by velocity. In the conducted studies, our approach excels over the existing baseline by consistently achieving higher metric values in predicting future events and seamlessly integrating the vehicle's kinematic model for enhanced decision-making. The code and the evaluation data are available at https://github.com/FARAZLOTFI/offroad_autonomous_navigation/.
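The uncertainty-aware speed selection can be sketched with a BALD-style mutual-information estimate over an ensemble of binary event predictors (illustrative numbers, not the released code):

```python
# Epistemic uncertainty as the mutual information between model identity and a
# binary event prediction (e.g. "collision ahead"); high MI -> slow down.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-8, 1 - 1e-8)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

ensemble_probs = np.array([0.1, 0.2, 0.8, 0.15, 0.9])     # P(event) from 5 ensemble members
mean_p = ensemble_probs.mean()
mutual_info = entropy(mean_p) - entropy(ensemble_probs).mean()   # disagreement term

v_max, v_min = 3.0, 0.5                                    # hypothetical speed limits (m/s)
speed = v_max - (v_max - v_min) * min(mutual_info / np.log(2), 1.0)
print(f"MI = {mutual_info:.3f} nats, commanded speed = {speed:.2f} m/s")
```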
Zero-Shot Fault Detection for Manipulators Through Bayesian Inverse Reinforcement Learning
We consider the detection of faults in robotic manipulators, with particular emphasis on faults that have not been observed or identified in advance, which naturally includes those that occur very infrequently. Recent studies indicate that the reward function obtained through Inverse Reinforcement Learning (IRL) can help detect anomalies caused by faults in a control system (i.e. fault detection). Current IRL methods for fault detection, however, either use a linear reward representation or require extensive sampling from the environment to estimate the policy, rendering them inappropriate for safety-critical situations where sampling of failure observations via fault injection can be expensive and dangerous. To address this issue, this paper proposes a zero-shot and exogenous fault detector based on an approximate variational reward imitation learning (AVRIL) structure. The fault detector recovers a reward signal as a function of externally observable information to describe the normal operation, which can then be used to detect anomalies caused by faults. Our method incorporates expert knowledge through a customizable reward prior distribution, allowing the fault detector to learn the reward solely from normal operation samples, without the need for a simulator or costly interactions with the environment. We evaluate our approach for exogenous partial fault detection in multi-stage robotic manipulator tasks, comparing it with several baseline methods. The results demonstrate that our method more effectively identifies unseen faults even when they occur within just three controller time steps.
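The detection step can be illustrated with a placeholder reward model standing in for the learned variational reward posterior (hypothetical data and thresholds throughout):

```python
# Score each observation with the learned reward; flag a fault when the score
# stays below a threshold calibrated on normal-operation samples only.
import numpy as np

def learned_reward(obs):
    # Stand-in for the mean of the reward posterior learned from normal data.
    return -np.linalg.norm(obs - 1.0)

rng = np.random.default_rng(1)
normal = [learned_reward(rng.normal(1.0, 0.1, size=6)) for _ in range(200)]
threshold = np.mean(normal) - 3 * np.std(normal)

def detect(window, k=3):
    # Declare a fault if the reward is below threshold for k consecutive steps.
    below = [learned_reward(o) < threshold for o in window]
    return any(all(below[i:i + k]) for i in range(len(below) - k + 1))

run = [rng.normal(1.0, 0.1, size=6) for _ in range(10)] + \
      [rng.normal(3.0, 0.1, size=6) for _ in range(5)]     # fault injected at step 10
print("fault detected:", detect(run))
```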