Marco Pedersoli

Affiliate Member
Associate Professor, École de technologie supérieure
Research Topics
Representation Learning
Multimodal Learning
Deep Learning
Generalization
Satellite Imagery
Generative Models
Robustness
Weak Supervision
Building Energy Management Systems
Vision and Language
Computer Vision

Biography

I am an Associate Professor at ÉTS Montréal, a member of LIVIA (the Laboratory of Imaging, Vision and Artificial Intelligence), and a member of the International Laboratory on Learning Systems (ILLS). I am also a member of ELLIS, the European network of excellence in AI. Since 2021, I have been co-holder of the Distech Industrial Research Chair on Embedded Neural Networks for the Control of Connected Buildings.

My research centers on deep learning methods and algorithms, with an emphasis on visual recognition and the automatic interpretation and understanding of images and videos. A key goal of my work is to advance artificial intelligence while minimizing two critical factors: computational cost and the need for human supervision. Both reductions are essential for scalable AI, enabling systems that are more efficient, adaptive, and embedded. In recent work, I have contributed to the development of neural networks for smart buildings, integrating AI-based solutions to improve energy efficiency and comfort in intelligent environments.

Publications

Domain Generalization by Rejecting Extreme Augmentations
Masih Aminbeidokhti
Fidel A. Guerrero Peña
Heitor Rapela Medeiros
Thomas Dubail
Eric Granger
HalluciDet: Hallucinating RGB Modality for Person Detection Through Privileged Information
Heitor Rapela Medeiros
Fidel A. Guerrero Peña
Masih Aminbeidokhti
Thomas Dubail
Eric Granger
A powerful way to adapt a visual recognition model to a new domain is through image translation. However, common image translation approaches only focus on generating data from the same distribution as the target domain. Given a cross-modal application, such as pedestrian detection from aerial images, with a considerable shift in data distribution between infrared (IR) and visible (RGB) images, a translation focused on generation might lead to poor performance, as the loss focuses on details irrelevant to the task. In this paper, we propose HalluciDet, an IR-RGB image translation model for object detection. Instead of focusing on reconstructing the original image in the IR modality, it seeks to reduce the detection loss of an RGB detector, and therefore avoids the need to access RGB data. This model produces a new image representation that enhances objects of interest in the scene and greatly improves detection performance. We empirically compare our approach against state-of-the-art methods for image translation and for fine-tuning on IR, and show that our HalluciDet improves detection accuracy in most cases by exploiting the privileged information encoded in a pre-trained RGB detector. Code: https://github.com/heitorrapela/HalluciDet.
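To make the mechanism concrete, here is a minimal sketch of the training signal described above: a small IR-to-RGB translation network is optimized through the detection loss of a frozen, pre-trained RGB detector rather than a reconstruction loss. The translator architecture, detector choice, and hyperparameters below are illustrative assumptions, not the authors' implementation (see the GitHub link above for that).

```python
# Hedged sketch: train an IR-to-RGB translator through a frozen RGB detector's
# detection loss, the core idea described in the HalluciDet abstract.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class IR2RGBTranslator(nn.Module):
    """Toy encoder-decoder standing in for the translation network (assumed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # RGB-like output in [0, 1]
        )

    def forward(self, ir):
        return self.net(ir)

translator = IR2RGBTranslator()
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.train()                     # training mode: the detector returns a loss dict
for p in detector.parameters():      # frozen detector: it only supplies the
    p.requires_grad_(False)          # "privileged" RGB supervision signal

optimizer = torch.optim.Adam(translator.parameters(), lr=1e-4)

# One hypothetical IR batch with person boxes (x1, y1, x2, y2) and labels.
ir_batch = torch.rand(2, 1, 256, 256)
targets = [{"boxes": torch.tensor([[30.0, 40.0, 90.0, 200.0]]),
            "labels": torch.tensor([1])} for _ in range(2)]

fake_rgb = translator(ir_batch)
losses = detector(list(fake_rgb), targets)  # RPN + ROI-head losses
sum(losses.values()).backward()             # gradients reach the translator only
optimizer.step()
```

Because only the translator receives gradients, the "hallucinated" RGB representation is free to diverge from photorealism whenever that helps the downstream detection objective.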
Multi-Source Domain Adaptation for Object Detection with Prototype-based Mean Teacher
Atif Belal
Akhil Meethal
Francisco Perdigon Romero
Eric Granger
Attention-based Class-Conditioned Alignment for Multi-Source Domain Adaptive Object Detection
Atif Belal
Akhil Meethal
Francisco Perdigon Romero
Eric Granger
Evaluating Supervision Levels Trade-Offs for Infrared-Based People Counting
David Latortue
Moetez Kdayem
Fidel A. Guerrero Peña
Eric Granger
Object detection models are commonly used for people counting (and localization) in many applications but require a dataset with costly bounding box annotations for training. Given the importance of privacy in people counting, these models rely more and more on infrared images, making the task even harder. In this paper, we explore how weaker levels of supervision affect the performance of deep person counting architectures for image classification and point-level localization. Our experiments indicate that counting people using a convolutional neural network with image-level annotation achieves a level of accuracy that is competitive with YOLO detectors and point-level localization models, yet provides a higher frame rate with a similar number of model parameters. Our code is available at: https://github.com/tortueTortue/IRPeopleCounting.
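As a rough illustration of the cheapest supervision level studied here, the sketch below counts people by classifying an infrared frame into one of the count classes 0..max_people, so each training image needs a single integer rather than bounding boxes. The backbone, count cap, and one-channel input stem are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch: person counting as image-level classification on IR frames.
import torch
import torch.nn as nn
from torchvision.models import resnet18

max_people = 15                               # assumed upper bound on the count
model = resnet18(num_classes=max_people + 1)  # classes 0, 1, ..., max_people
model.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # 1-channel IR input

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

frames = torch.rand(8, 1, 224, 224)              # a hypothetical IR batch
counts = torch.randint(0, max_people + 1, (8,))  # one count label per frame

loss = criterion(model(frames), counts)          # supervision cost: one integer per image
loss.backward()
optimizer.step()

predicted_counts = model(frames).argmax(dim=1)   # inference is a single forward pass
```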
Joint Multimodal Transformer for Dimensional Emotional Recognition in the Wild
Paul Waligora
Muhammad Osama Zeeshan
Muhammad Haseeb Aslam
Soufiane Belharbi
Alessandro Lameiras Koerich
Simon Bacon
Eric Granger
Audiovisual emotion recognition (ER) in videos has immense potential compared to unimodal approaches, as it effectively leverages the inter- and intra-modal dependencies between visual and auditory modalities. This work proposes a novel audio-visual emotion recognition system utilizing a joint multimodal transformer architecture with key-based cross-attention. This framework aims to exploit the complementary nature of audio and visual cues (facial expressions and vocal patterns) in videos, leading to superior performance compared to relying on a single modality alone. The proposed model leverages separate backbones for capturing intra-modal temporal dependencies within each modality (audio and visual). Subsequently, a joint multimodal transformer architecture integrates the individual modality embeddings, enabling the model to effectively capture inter-modal (between audio and visual) and intra-modal (within each modality) relationships. Extensive evaluations on the challenging Affwild2 dataset demonstrate that the proposed model significantly outperforms baseline and state-of-the-art methods in ER tasks.
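The fusion step can be pictured with a short sketch: per-modality backbones produce token sequences, and in a joint block each modality queries the other through cross-attention before pooling and regression. The dimensions, the twin-block layout, and the valence/arousal head are assumptions based on the abstract, not the authors' exact architecture.

```python
# Hedged sketch: cross-attention fusion of audio and visual token sequences.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """One modality's tokens (queries) attend to the other's (keys/values)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, context_tokens):
        fused, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + fused)   # residual connection

dim = 256
v_from_a, a_from_v = CrossModalBlock(dim), CrossModalBlock(dim)
head = nn.Linear(2 * dim, 2)      # valence and arousal (dimensional ER)

# Hypothetical sequences from separate temporal backbones.
visual = torch.rand(4, 50, dim)   # (batch, video frames, dim)
audio = torch.rand(4, 120, dim)   # (batch, audio steps, dim)

v_fused = v_from_a(visual, audio)  # visual queries, audio keys/values
a_fused = a_from_v(audio, visual)  # audio queries, visual keys/values
pooled = torch.cat([v_fused.mean(dim=1), a_fused.mean(dim=1)], dim=-1)
valence_arousal = head(pooled)     # (4, 2) continuous predictions
```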
Do not trust what you trust: Miscalibration in Semi-supervised Learning
Shambhavi Mishra
Balamurali Murugesan
Ismail Ben Ayed
Jose Dolz
State-of-the-art semi-supervised learning (SSL) approaches rely on highly confident predictions to serve as pseudo-labels that guide the training on unlabeled samples. An inherent drawback of this strategy stems from the quality of the uncertainty estimates, as pseudo-labels are filtered only based on their degree of uncertainty, regardless of the correctness of the underlying predictions. Thus, assessing and enhancing the uncertainty of network predictions is of paramount importance in the pseudo-labeling process. In this work, we empirically demonstrate that SSL methods based on pseudo-labels are significantly miscalibrated, and formally demonstrate the minimization of the min-entropy, a lower bound of the Shannon entropy, as a potential cause of miscalibration. To alleviate this issue, we integrate a simple penalty term, which enforces the logit distances of the predictions on unlabeled samples to remain low, preventing the network predictions from becoming overconfident. Comprehensive experiments on a variety of SSL image classification benchmarks demonstrate that the proposed solution systematically improves the calibration performance of relevant SSL models, while also enhancing their discriminative power, making it an appealing addition for tackling SSL tasks.
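One plausible way to instantiate that penalty is a margin constraint on the gap between the top logit and the others, which caps how peaked, and thus how overconfident, the unlabeled predictions can become. The margin value, weight, and the simplified FixMatch-style pseudo-labeling scaffold below are our assumptions, not the paper's exact formulation.

```python
# Hedged sketch: a logit-distance penalty added to a pseudo-labeling loss.
import torch
import torch.nn.functional as F

def logit_distance_penalty(logits: torch.Tensor, margin: float = 10.0) -> torch.Tensor:
    """Penalize gaps max(logits) - logit_j that exceed an (assumed) margin."""
    distances = logits.max(dim=1, keepdim=True).values - logits  # >= 0, shape (N, C)
    return torch.relu(distances - margin).mean()

# Simplified pseudo-labeling step on a stand-in batch of unlabeled logits.
logits_u = torch.randn(32, 10) * 8               # placeholder network outputs
probs = logits_u.softmax(dim=1)
mask = probs.max(dim=1).values > 0.95            # keep only confident pseudo-labels
pseudo = probs.argmax(dim=1)
ce = F.cross_entropy(logits_u, pseudo, reduction="none")
loss_u = (ce * mask).mean() + 0.1 * logit_distance_penalty(logits_u)
```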
StarVector: Generating Scalable Vector Graphics Code from Images and Text
Juan A. Rodriguez
Abhay Puri
Issam Hadj Laradji
Pau Rodriguez
Sai Rajeswar
David Vazquez
Scalable Vector Graphics (SVGs) are vital for modern image rendering due to their scalability and versatility. Previous SVG generation methods have focused on curve-based vectorization, lacking semantic understanding, often producing artifacts, and struggling with SVG primitives beyond path curves. To address these issues, we introduce StarVector, a multimodal large language model for SVG generation. It performs image vectorization by understanding image semantics and using SVG primitives for compact, precise outputs. Unlike traditional methods, StarVector works directly in the SVG code space, leveraging visual understanding to apply accurate SVG primitives. To train StarVector, we create SVG-Stack, a diverse dataset of 2M samples that enables generalization across vectorization tasks and precise use of primitives like ellipses, polygons, and text. We address challenges in SVG evaluation, showing that pixel-based metrics like MSE fail to capture the unique qualities of vector graphics. We introduce SVG-Bench, a benchmark across 10 datasets, and 3 tasks: Image-to-SVG, Text-to-SVG generation, and diagram generation. Using this setup, StarVector achieves state-of-the-art performance, producing more compact and semantically rich SVGs.
StarVector: Generating Scalable Vector Graphics Code from Images
Juan A. Rodriguez
Abhay Puri
Issam Hadj Laradji
Pau Rodriguez
David Vazquez
Sai Rajeswar
Scalable Vector Graphics (SVGs) have become integral in modern image rendering applications due to their infinite scalability in resolution, versatile usability, and editing capabilities. SVGs are particularly popular in the fields of web development and graphic design. Existing approaches for SVG modeling using deep learning often struggle with generating complex SVGs and are restricted to simpler ones that require extensive processing and simplification. This paper introduces StarVector, a multimodal SVG generation model that effectively integrates Code Generation Large Language Models (CodeLLMs) and vision models. Our approach utilizes a CLIP image encoder to extract visual representations from pixel-based images, which are then transformed into visual tokens via an adapter module. These visual tokens are prepended to the SVG token embeddings, and the sequence is modeled by the StarCoder model using next-token prediction, effectively learning to align the visual and code tokens. This enables StarVector to generate unrestricted SVGs that accurately represent pixel images. To evaluate StarVector's performance, we present SVG-Bench, a comprehensive benchmark for evaluating SVG methods across multiple datasets and relevant metrics. Within this benchmark, we introduce novel datasets including SVG-Stack, a large-scale dataset of real-world SVG examples, and use it to pre-train StarVector as a large foundation model for SVGs. Our results demonstrate significant enhancements in visual quality and complexity handling over current methods, marking a notable advancement in SVG generation technology. Code and models: https://github.com/joanrod/star-vector
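The pipeline in this abstract is explicit enough to sketch end to end: image features become "visual tokens" through an adapter and are prepended to the SVG code token embeddings before standard next-token prediction. All module sizes below are placeholders, and the toy encoder and decoder stand in for the real CLIP and StarCoder components (see the repository linked above).

```python
# Hedged sketch: prepend adapted visual tokens to SVG code tokens, then train
# with next-token prediction over the code positions only.
import torch
import torch.nn as nn

vocab_size, dim, n_visual_tokens = 49152, 512, 16

image_encoder = nn.Sequential(                 # stand-in for a CLIP image encoder
    nn.Conv2d(3, dim, kernel_size=32, stride=32), nn.Flatten(2))
adapter = nn.Linear(dim, dim)                  # maps visual features into LLM space
token_emb = nn.Embedding(vocab_size, dim)      # SVG code token embeddings
decoder = nn.TransformerEncoder(               # causal block standing in for StarCoder
    nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
lm_head = nn.Linear(dim, vocab_size)

image = torch.rand(1, 3, 512, 512)
svg_tokens = torch.randint(0, vocab_size, (1, 128))          # tokenized SVG source

visual = image_encoder(image).transpose(1, 2)[:, :n_visual_tokens]  # (1, 16, dim)
seq = torch.cat([adapter(visual), token_emb(svg_tokens)], dim=1)    # prepend image tokens

causal = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
logits = lm_head(decoder(seq, mask=causal))

# Next-token loss only over the SVG code positions (shift by one).
loss = nn.functional.cross_entropy(
    logits[:, n_visual_tokens - 1:-1].reshape(-1, vocab_size),
    svg_tokens.reshape(-1))
```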