
Giovanni Beltrame

Affiliate Member
Full Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Research Topics
Online Learning
Reinforcement Learning
Distributed Systems
Computer Vision

Biography

Giovanni Beltrame obtained his Ph.D. in computer engineering from the Politecnico di Milano in 2006, after which he worked as a microelectronics engineer at the European Space Agency (ESA) on a number of projects ranging from radiation-tolerant systems to computer-aided design. In 2010 he moved to Montréal. He is currently a professor in the Department of Computer Engineering and Software Engineering at Polytechnique Montréal, where he heads the MIST Lab, which is dedicated to space technologies and where he supervises more than 25 students and postdoctoral researchers. He has carried out several projects in collaboration with industry and government agencies in the fields of robotics, disaster response, and space exploration. With his team, he has taken part in several field missions with ESA, the Canadian Space Agency (CSA), and NASA (BRAILLE, PANGAEA-X, and IGLUNA, among others). His research focuses on the modelling and design of embedded systems, artificial intelligence, and robotics, topics on which he has published numerous papers in leading journals and conferences.

Current Students

PhD - Polytechnique

Publications

BlabberSeg: Real-Time Embedded Open-Vocabulary Aerial Segmentation
Haechan Mark Bong
Ricardo de Azambuja
Real-time aerial image segmentation plays an important role in the environmental perception of Uncrewed Aerial Vehicles (UAVs). We introduce BlabberSeg, an optimized Vision-Language Model built on CLIPSeg for on-board, real-time processing of aerial images by UAVs. BlabberSeg improves the efficiency of CLIPSeg by reusing prompt and model features, reducing computational overhead while achieving real-time open-vocabulary aerial segmentation. We validated BlabberSeg in a safe-landing scenario using the Dynamic Open-Vocabulary Enhanced SafE-Landing with Intelligence (DOVESEI) framework, which uses visual servoing and open-vocabulary segmentation. BlabberSeg reduces computational costs significantly, with a speed increase of 927.41% (16.78 Hz) on an NVIDIA Jetson Orin AGX (64 GB) compared with the original CLIPSeg (1.81 Hz), achieving real-time aerial segmentation with a negligible loss in accuracy (2.1%, measured as the ratio of correctly segmented area relative to CLIPSeg). BlabberSeg's source code is open and available online.
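The speed-up described above comes largely from encoding the prompts once and reusing those features for every incoming frame instead of recomputing them. The sketch below illustrates that caching pattern with placeholder encoder/segmenter functions; it is a hypothetical interface, not the released BlabberSeg or CLIPSeg code.

```python
"""Sketch of the prompt-feature caching idea (hypothetical interface):
text prompts are encoded once and their features reused for every frame."""
import numpy as np

class PromptFeatureCache:
    """Caches prompt features so the text encoder runs once per vocabulary."""
    def __init__(self, encode_text):
        self._encode_text = encode_text      # callable: list[str] -> np.ndarray
        self._cache = {}

    def get(self, prompts):
        key = tuple(prompts)
        if key not in self._cache:           # only encode unseen prompt sets
            self._cache[key] = self._encode_text(prompts)
        return self._cache[key]

# --- stand-ins for the real text encoder and segmenter (placeholders) ------
def fake_text_encoder(prompts):
    return np.random.rand(len(prompts), 512)            # (num_prompts, dim)

def fake_segment(frame, prompt_feats):
    h, w = frame.shape[:2]
    return np.random.rand(prompt_feats.shape[0], h, w)  # one mask per prompt

cache = PromptFeatureCache(fake_text_encoder)
prompts = ["grass", "water", "building"]                 # safe-landing classes
for _ in range(5):                                       # simulated video stream
    frame = np.zeros((352, 352, 3), dtype=np.uint8)
    feats = cache.get(prompts)                           # reused after 1st frame
    masks = fake_segment(frame, feats)
```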
Multi-Objective Risk Assessment Framework for Exploration Planning Using Terrain and Traversability Analysis
Riana Gagnon Souleiman
Vivek Shankar Vardharajan
Frequency-based View Selection in Gaussian Splatting Reconstruction
Monica Li
Pierre-Yves Lajoie
Three-dimensional reconstruction is a fundamental problem in robotics perception. We examine the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible. Although 3D Gaussian Splatting has made significant progress in image rendering and 3D reconstruction, the quality of the reconstruction is strongly impacted by the selection of 2D images and the estimation of camera poses through Structure-from-Motion (SfM) algorithms. Current view-selection methods, which rely directly on uncertainties from occlusions, depth ambiguities, or neural network predictions, are insufficient to handle the issue and struggle to generalize to new scenes. By ranking potential views in the frequency domain, we are able to effectively estimate the potential information gain of new viewpoints without ground-truth data. By overcoming current constraints on model architecture and efficacy, our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.
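A simple way to picture frequency-domain view ranking is to score each candidate render by its high-frequency energy and prefer views that would add fine detail. The sketch below is an illustrative proxy for that idea, not the paper's exact criterion.

```python
"""Illustrative sketch of frequency-domain view scoring: rank candidate
renders by the fraction of spectral energy above a radial cutoff."""
import numpy as np

def high_frequency_ratio(image, cutoff=0.25):
    """Fraction of spectral energy above `cutoff` (normalized radius) in a grayscale image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return spectrum[radius > cutoff].sum() / spectrum.sum()

def rank_views(candidate_renders):
    """Return candidate indices sorted from most to least informative."""
    scores = [high_frequency_ratio(img) for img in candidate_renders]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# toy example: three synthetic candidate renders
rng = np.random.default_rng(0)
flat = np.full((64, 64), 0.5)                 # textureless view
noisy = rng.random((64, 64))                  # high-detail view
mixed = flat + 0.1 * rng.random((64, 64))
print(rank_views([flat, noisy, mixed]))       # high-detail view ranked first
```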
Swarming Out of the Lab: Comparing Relative Localization Methods for Collective Behavior
Rafael Gomes Braga
Vivek Shankar Vardharajan
David St-Onge
Learning Multi-agent Multi-machine Tending by Mobile Robots
Abdalwhab Abdalwhab
David St-Onge
Robotics can help address the growing worker-shortage challenge of the manufacturing industry. In particular, machine tending is a task that collaborative robots can tackle and that can substantially boost productivity. Nevertheless, existing robotic systems deployed in that sector rely on a fixed single-arm setup, whereas mobile robots can provide more flexibility and scalability. In this work, we introduce a multi-agent, multi-machine tending learning framework for mobile robots based on multi-agent reinforcement learning (MARL) techniques, with a suitable observation and reward design. Moreover, an attention-based encoding mechanism is developed and integrated into the Multi-Agent Proximal Policy Optimization (MAPPO) algorithm to boost its performance in machine-tending scenarios. Our model (AB-MAPPO) outperforms MAPPO in this new challenging scenario in terms of task success, safety, and resource utilization. Furthermore, we provide an extensive ablation study to support our various design decisions.
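To illustrate how an attention-based encoder can sit in front of a MAPPO actor/critic, the following PyTorch sketch pools a variable set of entity observations (machines, other robots) into a fixed-size embedding. Layer sizes and the entity-token layout are assumptions, not the AB-MAPPO architecture.

```python
"""Minimal sketch of an attention-based observation encoder for MARL."""
import torch
import torch.nn as nn

class AttentionObsEncoder(nn.Module):
    """Encodes a set of entity features into a fixed-size vector via
    self-attention followed by permutation-invariant mean pooling."""
    def __init__(self, entity_dim=8, embed_dim=64, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(entity_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, entities):
        # entities: (batch, num_entities, entity_dim)
        x = self.embed(entities)
        attended, _ = self.attn(x, x, x)          # self-attention over entities
        pooled = attended.mean(dim=1)             # summary independent of entity order
        return torch.relu(self.out(pooled))

# toy usage: 3 robots, each observing 5 entities with 8 features
encoder = AttentionObsEncoder()
obs = torch.randn(3, 5, 8)
embedding = encoder(obs)                          # (3, 64), fed to actor/critic heads
```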
Active Semantic Mapping and Pose Graph Spectral Analysis for Robot Exploration
Rongge Zhang
Haechan Mark Bong
LiDAR-based Real-Time Object Detection and Tracking in Dynamic Environments
Wenqiang Du
In dynamic environments, the ability to detect and track moving objects in real time is crucial for autonomous robots to navigate safely and effectively. Traditional methods for dynamic object detection rely on high-accuracy odometry and maps to detect and track moving objects. However, these methods are not suitable for long-term operation in dynamic environments where the surroundings are constantly changing. To address this problem, we propose a novel system for detecting and tracking dynamic objects in real time using only LiDAR data. By emphasizing the extraction of low-frequency components from LiDAR data as feature points for foreground objects, our method significantly reduces the time required for object clustering and movement analysis. Additionally, we have developed a tracking approach that employs intensity-based ego-motion estimation along with a sliding-window technique to assess object movements. This enables the precise identification of moving objects and enhances the system's resilience to odometry drift. Our experiments show that this system can detect and track dynamic objects in real time with an average detection accuracy of 88.7% and a recall rate of 89.1%. Furthermore, our system demonstrates resilience against the prolonged drift typically associated with front-end-only LiDAR odometry. All of the source code, the labeled dataset, and the annotation tool are available at: https://github.com/MISTLab/lidar_dynamic_objects_detection.git
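The low-frequency idea can be pictured on a single LiDAR ring: smooth the range profile to obtain its low-frequency component, then flag points that deviate strongly from it as foreground candidates. The sketch below is a hedged illustration of that principle, not the released pipeline.

```python
"""Sketch of low-frequency foreground extraction on one LiDAR ring."""
import numpy as np

def foreground_mask(ranges, kernel=15, threshold=0.5):
    """ranges: (num_points,) range measurements of one ring, ordered by azimuth."""
    pad = kernel // 2
    padded = np.pad(ranges, pad, mode="wrap")                  # ring is circular
    low_freq = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return np.abs(ranges - low_freq) > threshold               # large deviation = foreground

# toy ring: flat wall at 10 m with a small object at 6 m
ring = np.full(360, 10.0)
ring[100:120] = 6.0
mask = foreground_mask(ring)
print(mask.sum(), "foreground points detected")
```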
Variable Time Step Reinforcement Learning for Robotic Applications
Dong Wang
Traditional reinforcement learning (RL) generates discrete control policies, assigning one action per cycle. These policies are usually implemented in a fixed-frequency control loop. This rigidity presents challenges because the optimal control frequency is task-dependent; suboptimal frequencies increase computational demands and reduce exploration efficiency. Variable Time Step Reinforcement Learning (VTS-RL) addresses these issues with adaptive control frequencies, executing actions only when necessary, thus reducing the computational load and extending the action space to include action durations. In this paper, we introduce the Multi-Objective Soft Elastic Actor-Critic (MOSEAC) method to perform VTS-RL, validating it through theoretical analysis and experimentation in simulation and on real robots. Results show faster convergence, better training results, and reduced energy consumption compared with other variable- or fixed-frequency approaches.
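The core change in VTS-RL is that each action carries a duration alongside the control. The following sketch shows that idea as an assumed Gymnasium wrapper (not the MOSEAC implementation), holding the control for a variable number of simulator steps.

```python
"""Sketch of a variable-time-step action space via an assumed Gym wrapper."""
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class VariableTimeStepWrapper(gym.Wrapper):
    def __init__(self, env, max_repeat=10):
        super().__init__(env)
        self.max_repeat = max_repeat
        low = np.append(env.action_space.low, 1.0)            # last dim = duration
        high = np.append(env.action_space.high, float(max_repeat))
        self.action_space = spaces.Box(low, high, dtype=np.float32)

    def step(self, action):
        control, duration = action[:-1], int(round(action[-1]))
        total_reward, terminated, truncated = 0.0, False, False
        for _ in range(max(1, min(duration, self.max_repeat))):
            obs, reward, terminated, truncated, info = self.env.step(control)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info

env = VariableTimeStepWrapper(gym.make("Pendulum-v1"))
obs, _ = env.reset(seed=0)
obs, r, *_ = env.step(np.array([0.5, 3.0], dtype=np.float32))  # hold torque for 3 steps
```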
MOSEAC: Streamlined Variable Time Step Reinforcement Learning
Dong Wang
Hierarchies define the scalability of robot swarms
Vivek Shankar Vardharajan
Karthik Soma
Sepand Dyanatkar
Pierre-Yves Lajoie
The emergent behaviors of swarms have fascinated scientists and garnered significant interest in the field of robotics. Traditionally, swarms are viewed as egalitarian, with robots sharing identical roles and capabilities. However, recent findings highlight the importance of hierarchy for deploying robot swarms more effectively in diverse scenarios. Despite nature's preference for hierarchies, the robotics field has clung to the egalitarian model, partly due to a lack of empirical evidence for the conditions favoring hierarchies. Our research demonstrates that while egalitarian swarms excel in environments proportionate to their collective sensing abilities, they struggle in larger or more complex settings. Hierarchical swarms, conversely, extend their sensing reach efficiently, proving successful in larger, more unstructured environments with fewer resources. We validated these concepts through simulations and physical robot experiments using a complex radiation-cleanup task. This study paves the way for developing adaptable, hierarchical swarm systems applicable in areas like planetary exploration and autonomous vehicles. Moreover, these insights could deepen our understanding of hierarchical structures in biological organisms.
Overcoming boundaries: Interdisciplinary challenges and opportunities in cognitive neuroscience
Arnaud Brignol
Anita Paas
Luis Sotelo-Castro
David St-Onge
Emily B.J. Coffey
Learning Control Barrier Functions and their application in Reinforcement Learning: A Survey
Maeva Guerrier
Hassan Fouad
Reinforcement learning is a powerful technique for developing new robot behaviors. However, the typical lack of safety guarantees constitutes a hurdle for its practical application on real robots. To address this issue, safe reinforcement learning aims to incorporate safety considerations, enabling faster transfer to real robots and facilitating lifelong learning. One promising approach within safe reinforcement learning is the use of control barrier functions. These functions provide a framework to ensure that the system remains in a safe state during the learning process. However, synthesizing control barrier functions is not straightforward and often requires ample domain knowledge. This challenge makes data-driven methods for automatically defining control barrier functions highly appealing. We conduct a comprehensive review of the existing literature on safe reinforcement learning using control barrier functions. Additionally, we investigate various techniques for automatically learning control barrier functions, aiming to enhance the safety and efficacy of reinforcement learning in practical robot applications.
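As a concrete picture of what a control barrier function enforces, the minimal worked example below filters a nominal control for a 1-D single integrator so the state never leaves its safe set. It is illustrative only and not drawn from any specific surveyed method.

```python
"""Minimal control-barrier-function safety filter for x' = u, safe set {x <= x_max}."""

def cbf_filter(x, u_nominal, x_max=1.0, alpha=2.0):
    """Enforce h(x) = x_max - x >= 0 via h'(x, u) + alpha * h(x) >= 0,
    which for x' = u reduces to the bound u <= alpha * (x_max - x)."""
    u_bound = alpha * (x_max - x)
    return min(u_nominal, u_bound)          # clip only when the nominal control is unsafe

# an aggressive nominal controller keeps pushing right; the CBF keeps x <= 1
x, dt = 0.0, 0.05
for _ in range(100):
    u = cbf_filter(x, u_nominal=3.0)
    x += dt * u
print(round(x, 3))                          # approaches x_max without crossing it
```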