
Giovanni Beltrame

Affiliate Member
Full Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Research Topics
Autonomous Robotics Navigation
Computer Vision
Distributed Systems
Human-Robot Interaction
Online Learning
Reinforcement Learning
Robotics
Swarm Intelligence

Biography

Giovanni Beltrame obtained his PhD in computer engineering from Politecnico di Milano in 2006, after which he worked as a microelectronics engineer at the European Space Agency on a number of projects, from radiation-tolerant systems to computer-aided design.

In 2010, he moved to Montréal, where he is currently a professor at Polytechnique Montréal in the Department of Computer Engineering and Software Engineering.

Beltrame directs the Making Innovative Space Technology (MIST) Lab, where he supervises more than twenty-five students and postdocs. He has completed several projects in collaboration with industry and government agencies in the areas of robotics, disaster response, and space exploration. He and his team have participated in several field missions with ESA, the Canadian Space Agency (CSA) and NASA, including BRAILLE, PANGAEA-X and IGLUNA.

His research interests include the modelling and design of embedded systems, artificial intelligence, and robotics, and he has published his findings in top journals and conferences.

Publications

Neural Incremental Dynamic Inversion Control of a Multirotor Robotic Airship
Ely Carneiro de Paiva
José Raul Azinheira
Rafael de Angelis Cordeiro
José Reginaldo H. Carvalho
Apolo Marton
PEACE: Prompt Engineering Automation for CLIPSeg Enhancement for Safe-Landing Zone Segmentation
Rongge Zhang
Antoine Robillard
Safe landing is essential in robotics applications, from industrial settings to space exploration. As artificial intelligence advances, we have developed PEACE (Prompt Engineering Automation for CLIPSeg Enhancement), a system that automatically generates and refines prompts for identifying landing zones in changing environments. Traditional approaches using fixed prompts for open-vocabulary models struggle with environmental changes and can lead to dangerous outcomes when conditions are not represented in the predefined prompts. PEACE addresses this limitation by dynamically adapting to shifting data distributions. Our key innovation is the dual segmentation of safe and unsafe landing zones, allowing the system to refine the results by removing unsafe areas from potential landing sites. Using only monocular cameras and image segmentation, PEACE can safely guide descent operations from 100 meters to altitudes as low as 20 meters. Testing shows that PEACE significantly outperforms standard CLIP and CLIPSeg prompting methods, improving the successful identification of safe landing zones from 57% to 92%. We have also demonstrated enhanced performance when replacing CLIPSeg with FastSAM. The complete source code is available as open-source software.
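The dual-segmentation refinement step described in the abstract can be sketched as a mask operation (a minimal illustration with hypothetical thresholded masks, not the authors' implementation):

```python
import numpy as np

# Hypothetical binary masks, as an open-vocabulary segmenter might
# produce after thresholding: `safe` marks pixels matching the
# "safe landing zone" prompts, `unsafe` marks pixels matching the
# "unsafe" prompts. Values here are made up for illustration.
safe = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]], dtype=bool)
unsafe = np.array([[0, 1, 0],
                   [0, 0, 0],
                   [0, 0, 1]], dtype=bool)

# Dual segmentation: remove unsafe areas from candidate landing sites.
# A pixel survives only if it was flagged safe and never flagged unsafe.
refined = safe & ~unsafe

print(refined.astype(int))
```

In practice both masks would come from the same segmentation model queried with safe and unsafe prompt sets; the refinement itself is just this elementwise intersection.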
BlabberSeg: Real-Time Embedded Open-Vocabulary Aerial Segmentation
Ricardo de Azambuja
Real-time aerial image segmentation plays an important role in the environmental perception of Uncrewed Aerial Vehicles (UAVs). We introduce BlabberSeg, an optimized Vision-Language Model built on CLIPSeg for on-board, real-time processing of aerial images by UAVs. BlabberSeg improves the efficiency of CLIPSeg by reusing prompt and model features, reducing computational overhead while achieving real-time open-vocabulary aerial segmentation. We validated BlabberSeg in a safe-landing scenario using the Dynamic Open-Vocabulary Enhanced SafE-Landing with Intelligence (DOVESEI) framework, which uses visual servoing and open-vocabulary segmentation. BlabberSeg reduces computational costs significantly, with a speed increase of 927.41% (16.78 Hz) on an NVIDIA Jetson Orin AGX (64 GB) compared with the original CLIPSeg (1.81 Hz), achieving real-time aerial segmentation with a negligible loss in accuracy (2.1%, measured as the ratio of correctly segmented area with respect to CLIPSeg). BlabberSeg's source code is open and available online.
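The prompt-feature reuse idea can be illustrated with a trivial memoization sketch (our own toy, not BlabberSeg's code): when the prompt set is fixed across frames, the expensive text-encoder pass runs once per distinct prompt rather than once per frame.

```python
from functools import lru_cache

CALLS = {"n": 0}  # counts how often the "encoder" actually runs

@lru_cache(maxsize=None)
def embed_prompt(prompt: str) -> tuple:
    """Stand-in for an expensive text-encoder forward pass.

    Hypothetical toy 'embedding'; a real system would cache the
    model's prompt features instead of this placeholder.
    """
    CALLS["n"] += 1
    return tuple(float(ord(c)) for c in prompt)

# Four frames reusing two fixed prompts: only two encoder passes occur.
for prompt in ("grass", "water", "grass", "grass"):
    _ = embed_prompt(prompt)

print(CALLS["n"])  # prints 2
```

The same caching principle applies to any model features that do not depend on the current frame.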
Active Semantic Mapping and Pose Graph Spectral Analysis for Robot Exploration
Physical Simulation for Multi-agent Multi-machine Tending
Abdalwhab Abdalwhab
David St-Onge
Beyond the lab: Feasibility of cognitive neuroscience data collection during a speleological expedition
Anita Paas
Hugo R. Jourde
Arnaud Brignol
Marie-Anick Savard
Zseyvfin Eyqvelle
Samuel Bassetto
Emily B.J. Coffey
Multi-Objective Risk Assessment Framework for Exploration Planning Using Terrain and Traversability Analysis
Riana Gagnon Souleiman
Vivek Shankar Vardharajan
Frequency-based View Selection in Gaussian Splatting Reconstruction
Monica Li
Pierre-Yves Lajoie
Three-dimensional reconstruction is a fundamental problem in robotics perception. We examine the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible. Although 3D Gaussian Splatting has made significant progress in image rendering and 3D reconstruction, the quality of the reconstruction is strongly impacted by the selection of 2D images and the estimation of camera poses through Structure-from-Motion (SfM) algorithms. Current view-selection methods that rely directly on uncertainties from occlusions, depth ambiguities, or neural network predictions are insufficient to handle the issue and struggle to generalize to new scenes. By ranking the potential views in the frequency domain, we are able to effectively estimate the potential information gain of new viewpoints without ground-truth data. By overcoming current constraints on model architecture and efficacy, our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.
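One way to read the frequency-domain ranking idea is as a spectral score per candidate view: a rendering dominated by low frequencies (smooth, under-observed regions) promises little new information, while fine detail spreads energy into high frequencies. The sketch below is our own toy proxy under that assumption, not the paper's algorithm; `cutoff` is a hypothetical parameter.

```python
import numpy as np

def high_frequency_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a normalized radial frequency.

    Illustrative scoring only: higher values suggest a detail-rich view,
    lower values a smooth one. Not the authors' ranking criterion.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized radial distance from the centered DC component.
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    total = spectrum.sum()
    return float(spectrum[r > cutoff].sum() / total) if total > 0 else 0.0

flat = np.ones((32, 32))                     # featureless candidate view
rng = np.random.default_rng(0)
textured = rng.standard_normal((32, 32))     # detail-rich candidate view

# The textured view ranks strictly higher than the flat one.
assert high_frequency_ratio(flat) < high_frequency_ratio(textured)
```

Ranking candidate viewpoints by such a score requires no ground-truth geometry, which matches the abstract's motivation.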
Swarming Out of the Lab: Comparing Relative Localization Methods for Collective Behavior
Rafael Gomes Braga
Vivek Shankar Vardharajan
David St-Onge
Concurrent product layout design optimization and dependency management using a modified NSGA-III approach
Yann-Seing Law-Kam Cio
Aurelian Vadean
Abolfazl Mohebbi
Sofiane Achiche
The complexity of mechatronic systems has increased with significant technological advances in their components, making their design more challenging. This is due to the need to incorporate expertise from different domains as well as the increased number and complexity of components integrated into the product. To alleviate the burden of designing such products, many industries and researchers are drawn to modularization, which identifies subsets of system components that can form modules. To achieve this, a novel product-related dependency management approach is proposed in this paper, supported by an augmented design structure matrix. This approach makes it possible to model positive and negative dependencies and to compute the combination potency between components to form modules. The approach is then integrated into a modified non-dominated sorting genetic algorithm III to concurrently optimize the design and identify the modules. The methodology is exemplified through the case study of the layout design of an automatic greenhouse. By applying the proposed methodology to the case study, it was possible to generate concepts that reduced the number of modules from 9 to 4 while ensuring the optimization of the design performance.
LiDAR-based Real-Time Object Detection and Tracking in Dynamic Environments
Wenqiang Du
In dynamic environments, the ability to detect and track moving objects in real time is crucial for autonomous robots to navigate safely and effectively. Traditional methods for dynamic object detection rely on high-accuracy odometry and maps to detect and track moving objects. However, these methods are not suitable for long-term operation in dynamic environments where the surroundings are constantly changing. To solve this problem, we propose a novel system for detecting and tracking dynamic objects in real time using only LiDAR data. By emphasizing the extraction of low-frequency components from LiDAR data as feature points for foreground objects, our method significantly reduces the time required for object clustering and movement analysis. Additionally, we have developed a tracking approach that employs intensity-based ego-motion estimation along with a sliding-window technique to assess object movements. This enables the precise identification of moving objects and enhances the system's resilience to odometry drift. Our experiments show that this system can detect and track dynamic objects in real time with an average detection accuracy of 88.7% and a recall rate of 89.1%. Furthermore, our system demonstrates resilience against the prolonged drift typically associated with front-end-only LiDAR odometry. All of the source code, labeled dataset, and annotation tool are available at: https://github.com/MISTLab/lidar_dynamic_objects_detection.git
Overcoming Boundaries: Interdisciplinary Challenges and Opportunities in Cognitive Neuroscience
Arnaud Brignol
Anita Paas
Luis Sotelo-Castro
David St-Onge
Emily B.J. Coffey