
Giovanni Beltrame

Affiliate Member
Full Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Research Topics
Autonomous Robotics Navigation
Computer Vision
Distributed Systems
Human-Robot Interaction
Online Learning
Reinforcement Learning
Robotics
Swarm Intelligence

Biography

Giovanni Beltrame obtained his PhD in computer engineering from Politecnico di Milano in 2006, after which he worked as a microelectronics engineer at the European Space Agency on a number of projects, from radiation-tolerant systems to computer-aided design.

In 2010, he moved to Montréal, where he is currently a professor at Polytechnique Montréal in the Department of Computer Engineering and Software Engineering.

Beltrame directs the Making Innovative Space Technology (MIST) Lab, where he supervises more than twenty-five students and postdocs. He has completed several projects in collaboration with industry and government agencies in the areas of robotics, disaster response, and space exploration. He and his team have participated in several field missions with ESA, the Canadian Space Agency (CSA), and NASA, including BRAILLE, PANGAEA-X, and IGLUNA.

His research interests include the modelling and design of embedded systems, AI and robotics, and he has published his findings in top journals and conferences.

Publications

BlabberSeg: Real-Time Embedded Open-Vocabulary Aerial Segmentation
Ricardo de Azambuja
Real-time aerial image segmentation plays an important role in the environmental perception of Uncrewed Aerial Vehicles (UAVs). We introduce BlabberSeg, an optimized Vision-Language Model built on CLIPSeg for on-board, real-time processing of aerial images by UAVs. BlabberSeg improves the efficiency of CLIPSeg by reusing prompt and model features, reducing computational overhead while achieving real-time open-vocabulary aerial segmentation. We validated BlabberSeg in a safe-landing scenario using the Dynamic Open-Vocabulary Enhanced SafE-Landing with Intelligence (DOVESEI) framework, which uses visual servoing and open-vocabulary segmentation. BlabberSeg reduces computational costs significantly, with a speed increase of 927.41% (16.78 Hz) on an NVIDIA Jetson AGX Orin (64 GB) compared with the original CLIPSeg (1.81 Hz), achieving real-time aerial segmentation with a negligible loss in accuracy (2.1%, measured as the ratio of correctly segmented area with respect to CLIPSeg). BlabberSeg's source code is open and available online.
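The feature-reuse idea described in the abstract can be illustrated with a minimal sketch. This is not the BlabberSeg implementation; the encoder below is a hypothetical stand-in, and the point is only that memoizing text-prompt features lets repeated open-vocabulary queries skip recomputation across frames.

```python
# Illustrative sketch (not the BlabberSeg code): cache text-prompt features so
# that repeated queries reuse precomputed embeddings instead of re-running the
# text encoder on every frame.
from functools import lru_cache

def encode_text(prompt: str) -> list[float]:
    # Hypothetical stand-in for a CLIP-style text encoder.
    return [ord(c) / 255.0 for c in prompt]

@lru_cache(maxsize=128)
def cached_text_features(prompt: str) -> tuple[float, ...]:
    # Tuples are hashable and immutable, so results can be safely memoized.
    return tuple(encode_text(prompt))

# The first call computes the embedding; later calls with the same prompt
# return the cached object at negligible cost.
features = cached_text_features("safe landing zone")
assert cached_text_features("safe landing zone") is features
```

In a real pipeline the cached values would be GPU tensors produced by the vision-language model's text tower, but the caching pattern is the same.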
Physical Simulation for Multi-agent Multi-machine Tending
Abdalwhab Abdalwhab
David St-Onge
Multi-Objective Risk Assessment Framework for Exploration Planning Using Terrain and Traversability Analysis
Riana Gagnon Souleiman
Vivek Shankar Vardharajan
Frequency-based View Selection in Gaussian Splatting Reconstruction
Monica Li
Pierre-Yves Lajoie
Three-dimensional reconstruction is a fundamental problem in robotics perception. We examine the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible. Although 3D Gaussian Splatting has made significant progress in image rendering and 3D reconstruction, the quality of the reconstruction is strongly impacted by the selection of 2D images and the estimation of camera poses through Structure-from-Motion (SfM) algorithms. Current view-selection methods that rely directly on uncertainties from occlusions, depth ambiguities, or neural network predictions are insufficient to handle the issue and struggle to generalize to new scenes. By ranking potential views in the frequency domain, we can effectively estimate the potential information gain of new viewpoints without ground-truth data. By overcoming current constraints on model architecture and efficacy, our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.
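To make the frequency-domain ranking idea concrete, here is a hedged sketch of one plausible reading, not the authors' method: score each candidate view by the high-frequency energy of a rendered image, treating spectral detail as a proxy for the information a viewpoint would add. The `cutoff` parameter and the disc-shaped low-pass region are assumptions for illustration.

```python
# Assumed sketch: rank candidate views by the fraction of 2D-FFT energy that
# lies outside a low-frequency disc, as a ground-truth-free info-gain proxy.
import numpy as np

def high_frequency_energy(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency disc."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # DC term moved to center
    power = np.abs(spectrum) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = power[radius <= cutoff * min(h, w) / 2].sum()
    total = power.sum()
    return float((total - low) / total)

def rank_views(candidates: dict[str, np.ndarray]) -> list[str]:
    # Highest high-frequency content first: the assumed information-gain proxy.
    return sorted(candidates,
                  key=lambda k: high_frequency_energy(candidates[k]),
                  reverse=True)
```

A textured or noisy rendering outranks a flat one under this score, which matches the intuition that detail-rich viewpoints contribute more to the reconstruction.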
Swarming Out of the Lab: Comparing Relative Localization Methods for Collective Behavior
Rafael Gomes Braga
Vivek Shankar Vardharajan
David St-Onge
Active Semantic Mapping and Pose Graph Spectral Analysis for Robot Exploration
Exploration in unknown and unstructured environments is a pivotal requirement for robotic applications. A robot's exploration behavior can be inherently affected by the performance of its Simultaneous Localization and Mapping (SLAM) subsystem, although SLAM and exploration are generally studied separately. In this paper, we formulate exploration as an active mapping problem and extend it with semantic information. We introduce a novel active metric-semantic SLAM approach, leveraging recent research advances in information theory and spectral graph theory: we combine semantic mutual information and the connectivity metrics of the underlying pose graph of the SLAM subsystem. We use the resulting utility function to evaluate different trajectories to select the most favorable strategy during exploration. Exploration and SLAM metrics are analyzed in experiments. Running our algorithm on the Habitat dataset, we show that, while maintaining efficiency close to the state-of-the-art exploration methods, our approach effectively increases the performance of metric-semantic SLAM with a 21% reduction in average map error and a 9% improvement in average semantic classification accuracy.
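The combination of semantic mutual information with pose-graph connectivity can be sketched as follows. This is an assumed form, not the paper's exact utility function: a weighted sum of a semantic information-gain term and the algebraic connectivity (second-smallest Laplacian eigenvalue, a standard spectral measure) of the candidate pose graph; the weights `alpha` and `beta` are illustrative.

```python
# Assumed sketch of a spectral utility for trajectory selection: weigh semantic
# mutual information against the algebraic connectivity of the pose graph.
import numpy as np

def algebraic_connectivity(adjacency: np.ndarray) -> float:
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return float(eigenvalues[1])  # lambda_2: zero iff the graph is disconnected

def trajectory_utility(semantic_mi: float, adjacency: np.ndarray,
                       alpha: float = 1.0, beta: float = 1.0) -> float:
    return alpha * semantic_mi + beta * algebraic_connectivity(adjacency)

# A trajectory that adds a loop closure yields a better-connected pose graph
# and, under this sketch, a higher utility for the same semantic gain.
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
loop = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # triangle
assert trajectory_utility(0.5, loop) > trajectory_utility(0.5, chain)
```

The spectral term rewards loop closures, which is consistent with the abstract's point that exploration choices feed back into SLAM accuracy.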
LiDAR-based Real-Time Object Detection and Tracking in Dynamic Environments
Wenqiang Du
In dynamic environments, the ability to detect and track moving objects in real-time is crucial for autonomous robots to navigate safely and effectively. Traditional methods for dynamic object detection rely on high-accuracy odometry and maps to detect and track moving objects. However, these methods are not suitable for long-term operation in dynamic environments where the surrounding environment is constantly changing. To solve this problem, we propose a novel system for detecting and tracking dynamic objects in real-time using only LiDAR data. By emphasizing the extraction of low-frequency components from LiDAR data as feature points for foreground objects, our method significantly reduces the time required for object clustering and movement analysis. Additionally, we have developed a tracking approach that employs intensity-based ego-motion estimation along with a sliding window technique to assess object movements. This enables the precise identification of moving objects and enhances the system's resilience to odometry drift. Our experiments show that this system can detect and track dynamic objects in real-time with an average detection accuracy of 88.7% and a recall rate of 89.1%. Furthermore, our system demonstrates resilience against the prolonged drift typically associated with front-end-only LiDAR odometry. All of the source code, labeled dataset, and the annotation tool are available at: https://github.com/MISTLab/lidar_dynamic_objects_detection.git
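The sliding-window movement assessment mentioned above can be illustrated with a toy sketch. This is an assumption-laden simplification, not the released code: each tracked cluster keeps a short window of recent centroids and is flagged as dynamic when its average displacement per frame exceeds a threshold (window size and threshold are made-up values).

```python
# Toy sketch of sliding-window motion classification for LiDAR clusters:
# a cluster is "dynamic" if its mean per-frame displacement exceeds a threshold.
from collections import deque

class TrackedObject:
    def __init__(self, window: int = 5, threshold: float = 0.1):
        self.centroids = deque(maxlen=window)  # recent 2D centroids
        self.threshold = threshold             # metres per frame, sensor-tuned

    def update(self, centroid: tuple[float, float]) -> None:
        self.centroids.append(centroid)

    def is_dynamic(self) -> bool:
        if len(self.centroids) < 2:
            return False  # not enough history to judge motion
        (x0, y0), (x1, y1) = self.centroids[0], self.centroids[-1]
        steps = len(self.centroids) - 1
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / steps
        return speed > self.threshold

car = TrackedObject()
for t in range(5):
    car.update((0.5 * t, 0.0))  # moving roughly 0.5 m per frame
assert car.is_dynamic()
```

In the actual system the centroids would come from clustered foreground points expressed in a drift-compensated frame via the intensity-based ego-motion estimate; the window simply smooths out per-frame jitter.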
Variable Time Step Reinforcement Learning for Robotic Applications
Yong Wang
Traditional reinforcement learning (RL) generates discrete control policies, assigning one action per cycle. These policies are usually implemented in a fixed-frequency control loop. This rigidity presents challenges as optimal control frequency is task-dependent; suboptimal frequencies increase computational demands and reduce exploration efficiency. Variable Time Step Reinforcement Learning (VTS-RL) addresses these issues with adaptive control frequencies, executing actions only when necessary, thus reducing computational load and extending the action space to include action durations. In this paper we introduce the Multi-Objective Soft Elastic Actor-Critic (MOSEAC) method to perform VTS-RL, validating it through theoretical analysis and experimentation in simulation and on real robots. Results show faster convergence, better training results, and reduced energy consumption with respect to other variable- or fixed-frequency approaches.
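The extended action space can be sketched schematically. This is one assumed reading of the idea, not the MOSEAC implementation: the policy outputs both an action and a duration, and the environment repeats the action for that many base steps, so the policy is queried far fewer times than a fixed-frequency loop would require.

```python
# Schematic sketch of variable-time-step execution: the policy picks an action
# AND how long to hold it, so decision frequency adapts to the task.
import random

def policy(state):
    # Hypothetical policy head: an action plus a hold duration in base steps.
    action = random.choice([-1, 1])
    duration = random.randint(1, 4)
    return action, duration

def rollout(env_step, state, horizon: int):
    total_reward, decisions, t = 0.0, 0, 0
    while t < horizon:
        action, duration = policy(state)
        decisions += 1  # one policy query covers several base steps
        for _ in range(min(duration, horizon - t)):
            state, reward = env_step(state, action)
            total_reward += reward
            t += 1
    return total_reward, decisions
```

With durations up to 4 base steps, `decisions` is at most `horizon` and can be as low as a quarter of it, which is the computational saving VTS-RL targets; the learning algorithm then has to trade this saving against control accuracy, which is where the multi-objective formulation comes in.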
MOSEAC: Streamlined Variable Time Step Reinforcement Learning
Yong Wang
Hierarchies define the scalability of robot swarms
Vivek Shankar Vardharajan
Karthik Soma
Sepand Dyanatkar
Pierre-Yves Lajoie
The emerging behaviors of swarms have fascinated scientists and gathered significant interest in the field of robotics. Traditionally, swarms are viewed as egalitarian, with robots sharing identical roles and capabilities. However, recent findings highlight the importance of hierarchy for deploying robot swarms more effectively in diverse scenarios. Despite nature's preference for hierarchies, the robotics field has clung to the egalitarian model, partly due to a lack of empirical evidence for the conditions favoring hierarchies. Our research demonstrates that while egalitarian swarms excel in environments proportionate to their collective sensing abilities, they struggle in larger or more complex settings. Hierarchical swarms, conversely, extend their sensing reach efficiently, proving successful in larger, more unstructured environments with fewer resources. We validated these concepts through simulations and physical robot experiments, using a complex radiation cleanup task. This study paves the way for developing adaptable, hierarchical swarm systems applicable in areas like planetary exploration and autonomous vehicles. Moreover, these insights could deepen our understanding of hierarchical structures in biological organisms.
Overcoming boundaries: Interdisciplinary challenges and opportunities in cognitive neuroscience
Arnaud Brignol
Anita Paas
Luis Sotelo-Castro
David St-Onge
Emily B.J. Coffey