
Gregory Dudek

Associate Academic Member
Full Professor and Director of the Mobile Robotics Laboratory, McGill University, School of Computer Science
Vice-President of Research and Lab Head, Samsung AI Center in Montréal

Biography

Gregory Dudek is a full professor at the Centre for Intelligent Machines (CIM) in the School of Computer Science and director of the Mobile Robotics Laboratory at McGill University. He is also Lab Head and Vice-President of Research at the Samsung AI Center in Montréal, and an associate academic member of Mila – Quebec Artificial Intelligence Institute.

He has authored or co-authored over 300 research papers on topics including visual object description and recognition, radio-frequency (RF) localization, robotic navigation and mapping, distributed system design, 5G telecommunications, and biological perception. With Michael Jenkin, he co-authored the book Computational Principles of Mobile Robotics, published by Cambridge University Press. He has chaired and contributed to numerous national and international conferences and professional activities in robotics, machine sensing, and computer vision. His research interests include perception for mobile robotics, navigation and position estimation, environment and shape modeling, computer vision, and collaborative filtering.

Current Students

PhD - McGill
Principal supervisor:
Master's (research) - McGill
Principal supervisor:

Publications

Tactile Modality Fusion for Vision-Language-Action Models
We propose TacFiLM, a lightweight modality-fusion approach that integrates visual-tactile signals into vision-language-action (VLA) models. While recent advances in VLA models have introduced robot policies that are both generalizable and semantically grounded, these models mainly rely on vision-based perception. Vision alone, however, cannot capture the complex interaction dynamics that occur during contact-rich manipulation, including contact forces, surface friction, compliance, and shear. While recent attempts to integrate tactile signals into VLA models often increase complexity through token concatenation or large-scale pretraining, the heavy computational demands of behavioural models necessitate more lightweight fusion strategies. To address these challenges, TacFiLM outlines a post-training finetuning approach that conditions intermediate visual features on pretrained tactile representations using feature-wise linear modulation (FiLM). Experimental results on insertion tasks demonstrate consistent improvements in success rate, direct insertion performance, completion time, and force stability across both in-distribution and out-of-distribution tasks. Together, these results support our method as an effective approach to integrating tactile signals into VLA models, improving contact-rich manipulation behaviours.
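FiLM, the mechanism the abstract refers to, scales and shifts a feature map channel-wise using parameters predicted from a conditioning signal. A minimal sketch, assuming a tactile embedding conditions a grid of visual features (the function and weight names here are illustrative, not TacFiLM's actual implementation):

```python
import numpy as np

def film(visual, tactile, W_gamma, b_gamma, W_beta, b_beta):
    # Feature-wise linear modulation (FiLM): a per-channel scale (gamma)
    # and shift (beta), both predicted from the tactile embedding.
    gamma = tactile @ W_gamma + b_gamma   # shape (C,)
    beta = tactile @ W_beta + b_beta      # shape (C,)
    return gamma * visual + beta          # broadcasts over spatial dims

# Toy demo: an 8x8 grid of 32-channel visual features, a 16-dim tactile embedding.
rng = np.random.default_rng(0)
visual = rng.normal(size=(8, 8, 32))
tactile = rng.normal(size=16)
W_gamma, W_beta = rng.normal(size=(16, 32)), rng.normal(size=(16, 32))
modulated = film(visual, tactile, W_gamma, np.ones(32), W_beta, np.zeros(32))
```

With the biases chosen above, a zero tactile embedding yields gamma = 1 and beta = 0, so the modulation reduces to the identity and the pretrained visual pathway is left untouched, which is one reason FiLM is a popular lightweight conditioning choice.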
MANSION: Multi-floor lANguage-to-3D Scene generatIOn for loNg-horizon tasks
Lirong Che
Shuo Wen
Shan Huang
Chuang Wang
Yuzhe Yang
Xueqian Wang
Jian Su
Real-world robotic tasks are long-horizon and often span multiple floors, demanding rich spatial reasoning. However, existing embodied benchmarks are largely confined to single-floor in-house environments, failing to reflect the complexity of real-world tasks. We introduce MANSION, the first language-driven framework for generating building-scale, multi-floor 3D environments. Being aware of vertical structural constraints, MANSION generates realistic, navigable whole-building structures with diverse, human-friendly scenes, enabling the development and evaluation of cross-floor long-horizon tasks. Building on this framework, we release MansionWorld, a dataset of over 1,000 diverse buildings ranging from hospitals to offices, alongside a Task-Semantic Scene Editing Agent that customizes these environments using open-vocabulary commands to meet specific user needs. Benchmarking reveals that state-of-the-art agents degrade sharply in our settings, establishing MANSION as a critical testbed for the next generation of spatial reasoning and planning.
AMP2026: A Multi-Platform Marine Robotics Dataset for Tracking and Mapping
Shuo Wen
David Widhalm
Zhizun Wang
Junming Shi
Mariana Sosa Guzmán
Kalvik Jakkala
Bennett Carley
Elias Sokolova
Yogesh Girdhar
Monika Roznere
Jason O’Kane
Junaed Sattar
Marine environments present significant challenges for perception and autonomy due to dynamic surfaces, limited visibility, and complex interactions between aerial, surface, and submerged sensing modalities. This paper introduces the Aerial Marine Perception Dataset (AMP2026), a multi-platform marine robotics dataset collected across multiple field deployments designed to support research in two primary areas: multi-view tracking and marine environment mapping. The dataset includes synchronized data from aerial drones, boat-mounted cameras, and submerged robotic platforms, along with associated localization and telemetry information. The goal of this work is to provide a publicly available dataset enabling research in marine perception and multi-robot observation scenarios. This paper describes the data collection methodology, sensor configurations, dataset organization, and intended research tasks supported by the dataset.
Contractive Diffusion Policies
Diffusion policies have emerged as powerful generative models for offline policy learning, whose sampling process can be rigorously characterized by a score function guiding a Stochastic Differential Equation (SDE). However, the same score-based SDE modeling that grants diffusion policies the flexibility to learn diverse behavior also incurs solver and score-matching errors, large data requirements, and inconsistencies in action generation. While less critical in image generation, these inaccuracies compound and lead to failure in continuous control settings. We introduce Contractive Diffusion Policies (CDPs) to induce contractive behavior in the diffusion sampling dynamics. Contraction pulls nearby flows closer to enhance robustness against solver and score-matching errors while reducing unwanted action variance. We develop an in-depth theoretical analysis along with a practical implementation recipe to incorporate CDPs into existing diffusion policy architectures with minimal modification and computational cost. We evaluate CDPs for offline learning by conducting extensive experiments in simulation and real-world settings. Across benchmarks, CDPs often outperform baseline policies, with pronounced benefits under data scarcity. Project page: https://contractive-diffusion.github.io
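The contraction idea can be visualized on a toy one-dimensional probability-flow sampler for a standard Gaussian target, whose exact score is s(x) = -x. The extra `lam * x` drift below is a simplified stand-in for the paper's actual contraction mechanism, included only to show how contraction shrinks the gap between nearby sampling trajectories (and hence the sensitivity to solver and score errors):

```python
def sample(x0, steps=50, dt=0.02, lam=0.0):
    # Deterministic (probability-flow) Euler sampler for a 1-D standard
    # Gaussian target, whose score is s(x) = -x, plus a contractive drift.
    x = float(x0)
    for _ in range(steps):
        score = -x
        x = x + dt * (score - lam * x)  # lam > 0 pulls nearby trajectories together
    return x

# Gap between two nearby initializations, with and without contraction.
gap_plain = abs(sample(1.0) - sample(1.1))
gap_contractive = abs(sample(1.0, lam=2.0) - sample(1.1, lam=2.0))
```

Per Euler step the gap is multiplied by |1 - dt(1 + lam)|, so any lam > 0 contracts it faster than the plain sampler, at the cost of biasing toward lower-variance actions, the trade-off the abstract alludes to.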
Contractive Diffusion Policies: Robust Action Diffusion via Contractive Score-Based Sampling with Differential Equations
Charlotte Morissette
Anas El Houssaini
Diffusion policies have emerged as powerful generative models for offline policy learning, whose sampling process can be rigorously characterized by a score function guiding a Stochastic Differential Equation (SDE). However, the same score-based SDE modeling that grants diffusion policies the flexibility to learn diverse behavior also incurs solver and score-matching errors, large data requirements, and inconsistencies in action generation. While less critical in image generation, these inaccuracies compound and lead to failure in continuous control settings. We introduce Contractive Diffusion Policies (CDPs) to induce contractive behavior in the diffusion sampling dynamics. Contraction pulls nearby flows closer to enhance robustness against solver and score-matching errors while reducing unwanted action variance. We develop an in-depth theoretical analysis along with a practical implementation recipe to incorporate CDPs into existing diffusion policy architectures with minimal modification and computational cost. We evaluate CDPs for offline learning by conducting extensive experiments in simulation and real-world settings. Across benchmarks, CDPs often outperform baseline policies, with pronounced benefits under data scarcity.
MANSION: Multi-floor lANguage-to-3D Scene generatIOn for loNg-horizon tasks
Lirong Che
Shuo Wen
Shan Huang
Chuang Wang
Yuzhe Yang
Xueqian Wang
Jian Su
Real-world robotic tasks are long-horizon and often span multiple floors, requiring complex spatial reasoning. Existing embodied benchmarks, however, are largely confined to single-floor homes, failing to evaluate agents on realistic, building-scale tasks. We introduce MANSION, a language-driven framework for generating building-scale, multi-floor 3D environments for long-horizon tasks. Using this framework, we release MansionWorld, a large-scale dataset featuring over 1,000 diverse, non-residential buildings. These environments support cross-floor skills and long-horizon task generation on reusable building layouts. Experiments show that current methods degrade sharply on our multi-floor tasks, highlighting both the challenge and the value of this setting for advancing embodied AI.
The Surprising Difficulty of Search in Model-Based Reinforcement Learning
Wei-Di Chang
Mikael Henaff
Brandon Amos
This paper investigates search in model-based reinforcement learning (RL). Conventional wisdom holds that long-term predictions and compounding errors are the primary obstacles for model-based RL. We challenge this view, showing that search is not a plug-and-play replacement for a learned policy. Surprisingly, we find that search can harm performance even when the model is highly accurate. Instead, we show that mitigating distribution shift matters more than improving model or value function accuracy. Building on this insight, we identify key techniques for enabling effective search, achieving state-of-the-art performance across multiple popular benchmark domains.
On Mobile Ad Hoc Networks for Coverage of Partially Observable Worlds
Shuo Wen
Louis-Roy Langevin
Antonio Loría
Learning Heuristics for Transit Network Design and Improvement with Deep Reinforcement Learning
Andrew Holliday
Ahmed El-Geneidy
Large Pre-Trained Models for Bimanual Manipulation in 3D
Generalizable Imitation Learning Through Pre-Trained Representations
Wei-Di Chang
Francois Hogan
In this paper we leverage self-supervised vision transformer models and their emergent semantic abilities to improve the generalization abilities of imitation learning policies. We introduce BC-ViT, an imitation learning algorithm that leverages rich DINO pre-trained Visual Transformer (ViT) patch-level embeddings to obtain better generalization when learning through demonstrations. Our learner sees the world by clustering appearance features into semantic concepts, forming stable keypoints that generalize across a wide range of appearance variations and object types. We show that this representation enables generalized behaviour by evaluating imitation learning across a diverse dataset of object manipulation tasks. Our method, data and evaluation approach are made available to facilitate further study of generalization in Imitation Learners.
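The keypoint idea sketched in the abstract, clustering patch-level appearance features into semantic groups and summarizing each group by an image location, can be illustrated with a plain k-means over toy patch embeddings. This is a hypothetical stand-in, not BC-ViT's actual DINO-based pipeline:

```python
import numpy as np

def patch_keypoints(patch_feats, coords, k=3, iters=10):
    # Farthest-point initialization, then plain k-means on patch embeddings;
    # each "keypoint" is the mean image location of one feature cluster.
    centers = [patch_feats[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(patch_feats - c, axis=1) for c in centers], axis=0)
        centers.append(patch_feats[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(patch_feats[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patch_feats[labels == j].mean(axis=0)
    return np.stack([coords[labels == j].mean(axis=0) for j in range(k)])

# Toy demo: three well-separated feature blobs at three image locations.
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(m, 0.1, size=(20, 8)) for m in (0.0, 5.0, 10.0)])
locs = np.concatenate([np.tile(c, (20, 1)) + rng.normal(0.0, 0.5, size=(20, 2))
                       for c in ([0.0, 0.0], [10.0, 0.0], [0.0, 10.0])])
keypoints = patch_keypoints(feats, locs, k=3)
```

Because the keypoints are defined by feature clusters rather than pixel positions, they stay attached to the same semantic parts as appearance varies, which is the property the paper exploits for generalization.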
AIoT Smart Home via Autonomous LLM Agents
Dmitriy Rivkin
Francois Hogan
Amal Feriani
Adam Sigal
Xue Liu