Portrait of Marco Pedersoli

Marco Pedersoli

Affiliate Member
Associate Professor, École de technologie supérieure
Research Topics
Representation Learning
Multimodal Learning
Deep Learning
Generalization
Satellite Imagery
Generative Models
Robustness
Weak Supervision
Building Energy Management Systems
Vision and Language
Computer Vision

Biography

I am an Associate Professor at ÉTS Montréal, a member of LIVIA (the Laboratory of Imaging, Vision and Artificial Intelligence), and a member of the International Laboratory on Learning Systems (ILLS). I am also a member of ELLIS, the European network of excellence in AI. Since 2021, I have been co-holder of the Distech Industrial Research Chair on Embedded Neural Networks for Connected Building Control.

My research focuses on Deep Learning methods and algorithms, with an emphasis on visual recognition and the automatic interpretation and understanding of images and videos. A central goal of my work is to advance artificial intelligence while minimizing two critical factors: computational cost and the need for human supervision. These reductions are essential for scalable AI, enabling systems that are more efficient, adaptive, and embedded. In my recent work, I have contributed to the development of neural networks for smart buildings, integrating AI-based solutions to improve energy efficiency and comfort in intelligent environments.

Publications

VLOD-TTA: Test-Time Adaptation of Vision-Language Object Detectors
Atif Belal
Heitor Rapela Medeiros
Eric Granger
Vision-language object detectors (VLODs) such as YOLO-World and Grounding DINO achieve impressive zero-shot recognition by aligning region proposals with text representations. However, their performance often degrades under domain shift. We introduce VLOD-TTA, a test-time adaptation (TTA) framework for VLODs that leverages dense proposal overlap and image-conditioned prompt scores. First, an IoU-weighted entropy objective is proposed that concentrates adaptation on spatially coherent proposal clusters and reduces confirmation bias from isolated boxes. Second, image-conditioned prompt selection is introduced, which ranks prompts by image-level compatibility and fuses the most informative prompts with the detector logits. Our benchmarking across diverse distribution shifts -- including stylized domains, driving scenes, low-light conditions, and common corruptions -- shows the effectiveness of our method on two state-of-the-art VLODs, YOLO-World and Grounding DINO, with consistent improvements over the zero-shot and TTA baselines. Code: https://github.com/imatif17/VLOD-TTA
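As a rough illustration of the IoU-weighted entropy idea mentioned in the abstract, the sketch below weights each proposal's prediction entropy by how much it overlaps with other proposals, so isolated boxes contribute little to the adaptation objective. The function names, the weighting scheme, and the normalization are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def pairwise_iou(boxes):
    """IoU between all pairs of [x1, y1, x2, y2] boxes; returns an (N, N) matrix."""
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    lt = torch.max(boxes[:, None, :2], boxes[None, :, :2])
    rb = torch.min(boxes[:, None, 2:], boxes[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area[:, None] + area[None, :] - inter + 1e-6)

def iou_weighted_entropy(boxes, class_probs):
    """Entropy of each proposal's class distribution, weighted by proposal overlap."""
    entropy = -(class_probs * class_probs.clamp(min=1e-8).log()).sum(dim=-1)  # (N,)
    overlap = pairwise_iou(boxes)
    overlap.fill_diagonal_(0)                  # ignore self-overlap
    weights = overlap.sum(dim=-1)              # large for dense clusters, small for isolated boxes
    weights = weights / (weights.sum() + 1e-8)
    return (weights * entropy).sum()           # objective minimized at test time
```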
AInstein: Can AI Rediscover Scientific Concepts from First Principles?
Large language models have demonstrated remarkable capabilities across diverse tasks, yet a fundamental question remains: can these models genuinely rediscover complex scientific insights, or do they merely recite memorized information? We present AInstein, a novel framework for evaluating whether language models can derive established scientific concepts from first principles when stripped of domain-specific terminology. Rather than testing the recall of scientific facts, we reformulate landmark discoveries as conceptual puzzles, challenging models to reconstruct the underlying technical solutions independently.
MuSACo: Multimodal Subject-Specific Selection and Adaptation for Expression Recognition with Co-Training
Muhammad Osama Zeeshan
Natacha Gillet
Alessandro Lameiras Koerich
Francois Bremond
Eric Granger
Personalized expression recognition (ER) involves adapting a machine learning model to subject-specific data for improved recognition of expressions with considerable interpersonal variability. Subject-specific ER can benefit significantly from multi-source domain adaptation (MSDA) methods, where each domain corresponds to a specific subject, to improve model accuracy and robustness. Despite promising results, state-of-the-art MSDA approaches often overlook multimodal information or blend sources into a single domain, limiting subject diversity and failing to explicitly capture unique subject-specific characteristics. To address these limitations, we introduce MuSACo, a multi-modal subject-specific selection and adaptation method for ER based on co-training. It leverages complementary information across multiple modalities and multiple source domains for subject-specific adaptation. This makes MuSACo particularly relevant for affective computing applications in digital health, such as patient-specific assessment for stress or pain, where subject-level nuances are crucial. MuSACo selects source subjects relevant to the target and generates pseudo-labels using the dominant modality for class-aware learning, in conjunction with a class-agnostic loss to learn from less confident target samples. Finally, source features from each modality are aligned, while only confident target features are combined. Our experimental results on challenging multimodal ER datasets: BioVid and StressID, show that MuSACo can outperform UDA (blending) and state-of-the-art MSDA methods.
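A minimal sketch of the pseudo-labeling split described above, assuming confidence thresholding on the dominant modality and entropy minimization as the class-agnostic loss; the threshold, the loss choice, and the function names are illustrative assumptions rather than MuSACo's actual objectives.

```python
import torch

def split_by_dominant_confidence(logits_dominant, threshold=0.9):
    """Use the dominant modality's predictions to split target samples:
    confident ones receive pseudo-labels (class-aware learning),
    the rest are kept for a class-agnostic objective."""
    probs = logits_dominant.softmax(dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)
    confident_mask = confidence >= threshold
    return pseudo_labels[confident_mask], confident_mask

def class_agnostic_loss(logits):
    """Entropy minimization on less confident target samples
    (one possible class-agnostic loss; the paper's choice may differ)."""
    p = logits.softmax(dim=-1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=-1).mean()
```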
Low-Rank Expert Merging for Multi-Source Domain Adaptation in Person Re-Identification
Taha Mustapha Nehdi
Nairouz Mrabah
Atif Belal
Eric Granger
Adapting person re-identification (reID) models to new target environments remains a challenging problem that is typically addressed using unsupervised domain adaptation (UDA) methods. Recent works show that when labeled data originates from several distinct sources (e.g., datasets and cameras), considering each source separately and applying multi-source domain adaptation (MSDA) typically yields higher accuracy and robustness compared to blending the sources and performing conventional UDA. However, state-of-the-art MSDA methods learn domain-specific backbone models or require access to source domain data during adaptation, resulting in significant growth in training parameters and computational cost. In this paper, a Source-free Adaptive Gated Experts (SAGE-reID) method is introduced for person reID. Our SAGE-reID is a cost-effective, source-free MSDA method that first trains individual source-specific low-rank adapters (LoRA) through source-free UDA. Next, a lightweight gating network is introduced and trained to dynamically assign optimal merging weights for fusion of LoRA experts, enabling effective cross-domain knowledge transfer. While the number of backbone parameters remains constant across source domains, LoRA experts scale linearly but remain negligible in size (= 2% of the backbone), reducing both the memory consumption and risk of overfitting. Extensive experiments conducted on three challenging b
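To make the expert-merging idea more concrete, the following sketch shows one way a frozen backbone layer could combine several LoRA experts using merging weights predicted by a separate lightweight gating network (not shown). The class name, rank, and einsum-based merge are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """A frozen linear layer plus a gated, weighted sum of K low-rank (LoRA) experts."""
    def __init__(self, base: nn.Linear, num_experts: int, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                       # backbone stays frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(num_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, d_out, rank))

    def forward(self, x, gate):
        # gate: (num_experts,) merging weights from the gating network
        # merged update: delta_W = sum_k gate_k * (B_k @ A_k)
        delta_w = torch.einsum("k,kor,kri->oi", gate, self.B, self.A)
        return self.base(x) + nn.functional.linear(x, delta_w)
```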
Infrared Object Detection with Ultra Small ConvNets: Is ImageNet Pretraining Still Useful?
Srikanth Muralidharan
Heitor Rapela Medeiros
Masih Aminbeidokhti
Eric Granger
Many real-world applications require recognition models that are robust to different operational conditions and modalities, but at the same time run on small embedded devices with limited hardware. While pre-training is known to be very beneficial for the accuracy and robustness of normal-size models, its effect on small models that can be deployed on embedded and edge devices is not clear. In this work, we investigate the effect of ImageNet pretraining on increasingly small backbone architectures (ultra-small models, with
Personalized Feature Translation for Expression Recognition: An Efficient Source-Free Domain Adaptation Method
Masoumeh Sharafi
Soufiane Belharbi
Houssem Ben Salem
Ali Etemad
Alessandro Lameiras Koerich
Simon Bacon
Eric Granger
Facial expression recognition (FER) models are employed in many video-based affective computing applications, such as human-computer interaction and healthcare monitoring. However, deep FER models often struggle with subtle expressions and high inter-subject variability, limiting their performance in real-world applications. To improve their performance, source-free domain adaptation (SFDA) methods have been proposed to personalize a pretrained source model using only unlabeled target domain data, thereby avoiding data privacy, storage, and transmission constraints. This paper addresses a challenging scenario where source data is unavailable for adaptation, and only unlabeled target data consisting solely of neutral expressions is available. SFDA methods are not typically designed to adapt using target data from only a single class. Further, using models to generate facial images with non-neutral expressions can be unstable and computationally intensive. In this paper, personalized feature translation (PFT) is proposed for SFDA. Unlike current image translation methods for SFDA, our lightweight method operates in the latent space. We first pre-train the translator on the source domain data to transform the subject-specific style features from one source subject into another. Expression information is preserved by optimizing a combination of expression consistency and style-aware objectives. Then, the translator is adapted on neutral target data, without using source data or image synthesis. By translating in the latent space, PFT avoids the complexity and noise of face expression generation, producing discriminative embeddings optimized for classification. Using PFT eliminates the need for image synthesis, reduces computational overhead (using a lightweight translator), and only adapts part of the model, making the method efficient compared to image-based translation.
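The sketch below conveys the general flavor of latent-space translation with expression-consistency and style-aware objectives: a small MLP maps a source subject's feature toward another subject's style while a frozen expression classifier keeps the prediction stable. The module names, loss choices (KL and MSE), and equal weighting are illustrative assumptions rather than the paper's exact objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentTranslator(nn.Module):
    """Maps a source subject's latent feature toward a target subject's style."""
    def __init__(self, feat_dim=512, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + style_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, feat, target_style):
        return self.net(torch.cat([feat, target_style], dim=-1))

def pft_pretrain_loss(classifier, translator, feat_src, style_tgt, feat_tgt):
    """Expression-consistency + style-aware objectives (illustrative weights)."""
    translated = translator(feat_src, style_tgt)
    # expression consistency: the frozen classifier should predict the same expression
    expr_loss = F.kl_div(
        classifier(translated).log_softmax(-1),
        classifier(feat_src).softmax(-1),
        reduction="batchmean",
    )
    # style-aware: the translated feature should resemble the target subject's features
    style_loss = F.mse_loss(translated, feat_tgt)
    return expr_loss + style_loss
```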
WiSE-OD: Benchmarking Robustness in Infrared Object Detection
Heitor Rapela Medeiros
Atif Belal
Masih Aminbeidokhti
Eric Granger
Advancements in Affective and Behavior Analysis: The 8th ABAW Workshop and Competition
Dimitrios Kollias
Panagiotis Tzirakis
Alan Cowen
Stefanos Zafeiriou
Irene Kotsia
Eric Granger
Simon Bacon
Alice Baird
Chris Gagne
Chunchang Shao
Guanyu Hu
Soufiane Belharbi
Muhammad Haseeb Aslam
The 8th Affective & Behavior Analysis in-the-Wild (ABAW) Workshop at CVPR 2025 focuses on advancing the understanding and modeling of human affective and behavioral patterns in real-world scenarios. It serves as a platform for interdisciplinary collaboration, showcasing the latest methodologies and applications in affective computing and behavior analysis. A core feature of the workshop is the ABAW Competition, which tackles critical challenges in human affect and behavior recognition essential for developing human-centered AI technologies. The 8th ABAW Competition features six challenges: (1) estimation of two continuous affect dimensions (valence and arousal), (2) recognition of eight mutually exclusive classes (the 7 basic expressions and a category 'other'), (3) detection of twelve action units, (4) recognition of seven mutually exclusive compound expressions, (5) estimation of emotional mimicry intensity across six dimensions, and (6) recognition of presence and absence of ambivalence/hesitancy. These challenges leverage datasets such as Aff-Wild2, C-EXPR-DB, HUME-Vidmimic2, and BAH, providing a comprehensive benchmark for evaluating affective behavior analysis models. Each challenge is assessed using specialized performance metrics, including Concordance Correlation Coefficient, F1-score, and Pearson's correlation. This paper provides an overview of the competition, detailing the datasets, pre-processing methodologies, evaluation criteria, baseline models, and the top-performing teams in each challenge, along with their obtained performance. Further details on the competition are available at: https://affective-behavior-analysis-inthe-wild.github.io/8th.
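For reference, the Concordance Correlation Coefficient used to score the valence/arousal challenge has a standard closed form; the sketch below is one common way to compute it (the function name and array inputs are illustrative, not the competition's official evaluation script).

```python
import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    """CCC between a predicted and a ground-truth valence/arousal trace."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```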
Rendering-Aware Reinforcement Learning for Vector Graphics Generation
Juan A. Rodriguez
Haotian Zhang
Abhay Puri
Rishav Pramanik
Pascal Wichmann
Arnab Mondal
Mohammad Reza Samsami
Sai Rajeswar
David Vazquez
Scalable Vector Graphics (SVG) offer a powerful format for representing visual designs as interpretable code. Recent advances in vision-language models (VLMs) have enabled high-quality SVG generation by framing the problem as a code generation task and leveraging large-scale pretraining. VLMs are particularly suitable for this task as they capture both global semantics and fine-grained visual patterns, while transferring knowledge across vision, natural language, and code domains. However, existing VLM approaches often struggle to produce faithful and efficient SVGs because they never observe the rendered images during training. Although differentiable rendering for autoregressive SVG code generation remains unavailable, rendered outputs can still be compared to original inputs, enabling evaluative feedback suitable for reinforcement learning (RL). We introduce RLRF (Reinforcement Learning from Rendering Feedback), an RL method that enhances SVG generation in autoregressive VLMs by leveraging feedback from rendered SVG outputs. Given an input image, the model generates SVG roll-outs that are rendered and compared to the original image to compute a reward. This visual fidelity feedback guides the model toward producing more accurate, efficient, and semantically coherent SVGs. RLRF significantly outperforms supervised fine-tuning, addressing common failure modes and enabling precise, high-quality SVG generation with strong structural understanding and generalization.
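As a loose sketch of the rendering-feedback reward described above: render the generated SVG, compare it to the input image, and penalize invalid or overly long code. The helper names (`render_fn`, `perceptual_fn`), the similarity measures, and the weights are placeholders, not the paper's reward definition.

```python
import torch
import torch.nn.functional as F

def rendering_reward(render_fn, svg_code, target_image, perceptual_fn=None):
    """Reward for an SVG rollout: render it and compare to the input image.
    `render_fn` rasterizes SVG code to a (3, H, W) tensor in [0, 1];
    `perceptual_fn`, if given, returns a scalar similarity in [0, 1]."""
    try:
        rendered = render_fn(svg_code)
    except Exception:
        return -1.0                                   # invalid SVG receives a penalty
    pixel_sim = 1.0 - F.l1_loss(rendered, target_image).item()
    if perceptual_fn is not None:
        pixel_sim = 0.5 * pixel_sim + 0.5 * float(perceptual_fn(rendered, target_image))
    length_penalty = 1e-4 * len(svg_code)             # favor compact SVG code
    return pixel_sim - length_penalty
```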