Marco Pedersoli

Affiliate Member
Associate Professor, École de technologie supérieure
Research Topics
Representation Learning
Multimodal Learning
Deep Learning
Generalization
Satellite Imagery
Generative Models
Robustness
Weak Supervision
Building Energy Management Systems
Vision and Language
Computer Vision

Biography

I am an Associate Professor at ÉTS Montréal, a member of LIVIA (the Laboratory of Imaging, Vision and Artificial Intelligence), and a member of the International Laboratory on Learning Systems (ILLS). I am also a member of ELLIS, the European network of excellence in AI. Since 2021, I have been co-holder of the Distech Industrial Research Chair on Embedded Neural Networks for Connected Building Control.

My research centers on deep learning methods and algorithms, with an emphasis on visual recognition and the automatic interpretation and understanding of images and videos. A principal objective of my work is to advance artificial intelligence while minimizing two critical factors: computational load and the need for human supervision. These reductions are essential for scalable AI, enabling systems that are more efficient, adaptive, and embedded. In recent work, I have contributed to the development of neural networks for smart buildings, integrating AI-based solutions to improve energy efficiency and comfort in intelligent environments.

Current Students

Research Master's - École de technologie supérieure
Principal supervisor:

Publications

Test-Time Adaptation via Cache Personalization for Facial Expression Recognition in Videos
Masoumeh Sharafi
Muhammad Zeeshan
Soufiane Belharbi
Alessandro L. Koerich
Eric Granger
Facial expression recognition (FER) in videos requires model personalization to capture the considerable variations across subjects. Vision-language models (VLMs) offer strong transfer to downstream tasks through image-text alignment, but their performance can still degrade under inter-subject distribution shifts. Personalizing models using test-time adaptation (TTA) methods can mitigate this challenge. However, most state-of-the-art TTA methods rely on unsupervised parameter optimization, introducing computational overhead that is impractical in many real-world applications. This paper introduces TTA through Cache Personalization (TTA-CaP), a cache-based TTA method that enables cost-effective (gradient-free) personalization of VLMs for video FER. Prior cache-based TTA methods rely solely on dynamic memories that store test samples, which can accumulate errors and drift due to noisy pseudo-labels. TTA-CaP leverages three coordinated caches: a personalized source cache that stores source-domain prototypes, a positive target cache that accumulates reliable subject-specific samples, and a negative target cache that stores low-confidence cases as negative samples to reduce the impact of noisy pseudo-labels. Cache updates and replacement are controlled by a tri-gate mechanism based on temporal stability, confidence, and consistency with the personalized cache. Finally, TTA-CaP refines predictions through fusion of embeddings, yielding refined representations that support temporally stable video-level predictions. Our experiments on three challenging video FER datasets, BioVid, StressID, and BAH, indicate that TTA-CaP can outperform state-of-the-art TTA methods under subject-specific and environmental shifts, while maintaining low computational and memory overhead for real-world deployment.
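
The tri-gate cache update described in the abstract can be pictured with a short sketch. The Python below is purely illustrative: the class name, thresholds, and replacement policy are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a confidence-gated cache for gradient-free TTA.
# All names and threshold values are hypothetical.
import numpy as np

class PrototypeCache:
    def __init__(self, capacity: int = 32):
        self.capacity = capacity
        self.items: list[tuple[np.ndarray, int, float]] = []  # (embedding, pseudo_label, confidence)

    def maybe_add(self, emb: np.ndarray, pseudo_label: int, confidence: float,
                  temporal_stability: float, prototype_consistency: float,
                  conf_thr: float = 0.7, stab_thr: float = 0.8,
                  cons_thr: float = 0.5) -> bool:
        """Tri-gate: admit a sample only if it is confident, temporally
        stable, and consistent with the personalized source prototypes."""
        if (confidence < conf_thr or temporal_stability < stab_thr
                or prototype_consistency < cons_thr):
            return False
        self.items.append((emb, pseudo_label, confidence))
        if len(self.items) > self.capacity:
            # One plausible replacement policy: evict the least confident entry.
            self.items.sort(key=lambda t: t[2])
            self.items.pop(0)
        return True
```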
Semantic Anchor Transport: Robust Test-Time Adaptation for Vision-Language Models
Shambhavi Mishra
Julio Silva-Rodríguez
Ismail Ben Ayed
Jose Dolz
Large pre-trained vision-language models (VLMs) like CLIP exhibit strong zero-shot performance but struggle under distributional shifts. We propose Semantic Anchor Transport (SAT), a method that generates pseudo-labels for test samples by aligning visual embeddings with reliable text-based semantic anchors using Optimal Transport for batch-wise label assignment. These pseudo-labels enable efficient test-time adaptation through principled cross-modal alignment. We further incorporate multi-template distillation to leverage diverse textual clues, replicating multi-view contrastive learning without added computational cost. Extensive experiments demonstrate consistent performance gains over state-of-the-art methods across multiple benchmarks while maintaining computational efficiency.
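
Batch-wise label assignment via Optimal Transport can be sketched with a self-contained Sinkhorn solver. This is a minimal illustration of entropic OT pseudo-labeling, assuming L2-normalized embeddings and uniform marginals; the actual SAT formulation may differ.

```python
# Minimal Sinkhorn-based pseudo-labeling sketch (hyperparameters illustrative).
import numpy as np

def sinkhorn_pseudo_labels(img_emb: np.ndarray, txt_anchors: np.ndarray,
                           eps: float = 0.05, n_iter: int = 50) -> np.ndarray:
    """img_emb: (B, D) L2-normalized image embeddings.
    txt_anchors: (C, D) L2-normalized text anchors, one per class.
    Returns hard pseudo-labels of shape (B,)."""
    cost = 1.0 - img_emb @ txt_anchors.T           # cosine distance, (B, C)
    K = np.exp(-cost / eps)                        # Gibbs kernel
    B, C = K.shape
    r, c = np.ones(B) / B, np.ones(C) / C          # uniform marginals
    u = np.ones(B) / B
    for _ in range(n_iter):                        # Sinkhorn iterations
        u = r / (K @ (c / (K.T @ u)))
    v = c / (K.T @ u)
    plan = u[:, None] * K * v[None, :]             # transport plan, (B, C)
    return plan.argmax(axis=1)                     # hard assignment per image
```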
VectorGym: A Multitask Benchmark for SVG Code Generation, Sketching, and Editing
Haotian Zhang
Tianyang Zhang
Rishav Pramanik
Meng Lin
Xiaoqing Xie
Marco Terral
Aly Shariff
Sai Rajeswar
Christopher Pal
We introduce VectorGym, a comprehensive benchmark suite for Scalable Vector Graphics (SVG) that spans generation from text and sketches, complex editing, and visual understanding. VectorGym addresses the lack of realistic, challenging benchmarks aligned with professional design workflows. Our benchmark comprises four tasks with expert human-authored annotations: the novel Sketch2SVG task (VG-Sketch); a new SVG editing dataset (VG-Edit) featuring complex, multi-step edits with higher-order primitives; Text2SVG generation (VG-Text); and SVG captioning (VG-Cap). Unlike prior benchmarks that rely on synthetic edits, VectorGym provides gold-standard human annotations that require semantic understanding and design intent. We also propose a multi-task reinforcement learning approach that jointly optimizes across all four tasks using rendering-based rewards. Our method, built on GRPO with curriculum learning, trains a Qwen3-VL 8B model that achieves state-of-the-art performance among open-source models, surpassing much larger models including Qwen3-VL 235B and matching GPT-4o. We also introduce a VLM-as-a-Judge metric for SVG generation, validated through human correlation studies. Our evaluation of frontier VLMs reveals significant performance gaps, positioning VectorGym as a rigorous framework for advancing visual code generation. VectorGym is publicly available on huggingface.co/datasets/ServiceNow/VectorGym.
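
Since the benchmark is hosted on the Hugging Face Hub, a quick way to inspect it is via the datasets library. The dataset ID comes from the link above; the config, split, and field names are not specified in the abstract, so the sketch below only inspects whatever is there.

```python
# Sketch of loading VectorGym from the Hugging Face Hub. Split and column
# names below are unknown and simply printed for inspection.
from datasets import load_dataset

ds = load_dataset("ServiceNow/VectorGym")    # may require choosing a config
print(ds)                                    # available splits and columns
first_split = list(ds.keys())[0]
example = next(iter(ds[first_split]))
print(example.keys())                        # e.g. prompt/sketch/SVG fields, if present
```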
Leveraging Diversity for Privileged Multi-Teacher Knowledge Distillation for Facial Expression Recognition
Muhammad Haseeb Aslam
Alessandro L. Koerich
Eric Granger
Iterative Monte Carlo Tree Search for Neural Architecture Search
Mehraveh Javan
Matthew Toews
LT-Soups: Bridging Head and Tail Classes via Subsampled Model Soups
Masih Aminbeidokhti
Subhankar Roy
Eric Granger
Elisa Ricci
Real-world datasets typically exhibit long-tailed (LT) distributions, where a few head classes dominate and many tail classes are severely underrepresented. While recent work shows that parameter-efficient fine-tuning (PEFT) methods like LoRA and AdaptFormer preserve tail-class performance on foundation models such as CLIP, we find that they do so at the cost of head-class accuracy. We identify the head-tail ratio, the proportion of head to tail classes, as a crucial but overlooked factor influencing this trade-off. Through controlled experiments on CIFAR100 with varying imbalance ratio (…)
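
For readers unfamiliar with model soups, the underlying primitive is uniform weight averaging across finetuned checkpoints. The sketch below shows only that generic primitive, assuming checkpoints saved as plain state dicts; how LT-Soups subsamples models to balance head and tail classes is specific to the paper and not shown.

```python
# Generic model-soup primitive: uniform averaging of finetuned checkpoints.
# Assumes each path holds a plain state dict saved with torch.save.
import torch

def average_state_dicts(paths: list[str]) -> dict:
    soups = [torch.load(p, map_location="cpu") for p in paths]
    return {k: torch.stack([sd[k].float() for sd in soups]).mean(dim=0)
            for k in soups[0]}
```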
High-Rate Mixout: Revisiting Mixout for Robust Domain Generalization
Masih Aminbeidokhti
Heitor Rapela Medeiros
Srikanth Muralidharan
Eric Granger
Revisiting Mixout: An Overlooked Path to Robust Finetuning
Masih Aminbeidokhti
Heitor Rapela Medeiros
Eric Granger
Finetuning vision foundation models often improves in-domain accuracy but comes at the cost of robustness under distribution shift. We revisit Mixout, a stochastic regularizer that intermittently replaces finetuned weights with their pretrained reference, through the lens of a single-run, weight-sharing implicit ensemble. This perspective reveals three key levers that govern robustness: the masking anchor, the resampling frequency, and the mask sparsity. Guided by this analysis, we introduce GMixout, which (i) replaces the fixed anchor with an exponential moving-average snapshot that adapts during training, and (ii) regulates the masking period via an explicit resampling-frequency hyperparameter. Our sparse-kernel implementation updates only a small fraction of parameters with no inference-time overhead, enabling training on consumer-grade GPUs. In experiments on benchmarks covering covariate shift, corruption, and class imbalance (ImageNet / ImageNet-LT, DomainNet, iWildCam, and CIFAR100-C), GMixout consistently improves in-domain accuracy beyond zero-shot performance while surpassing both Model Soups and strong parameter-efficient finetuning baselines under distribution shift.
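
The abstract's two modifications, an EMA anchor and an explicit resampling period, can be sketched in a few lines of PyTorch. Names and the exact update rule below paraphrase the abstract and are not the authors' code.

```python
# Illustrative sketch of the GMixout idea: periodically resample a sparse
# mask and mix finetuned weights with an EMA anchor (all values hypothetical).
import torch

@torch.no_grad()
def gmixout_step(param: torch.Tensor, anchor: torch.Tensor,
                 mask: torch.Tensor, ema_decay: float = 0.999) -> torch.Tensor:
    """Replace masked coordinates of the finetuned weights with the EMA
    anchor, then move the anchor toward the current weights."""
    param.copy_(torch.where(mask, anchor, param))            # Mixout-style replacement
    anchor.mul_(ema_decay).add_(param, alpha=1 - ema_decay)  # EMA anchor update
    return param

def resample_mask(shape, sparsity: float = 0.9, device="cpu") -> torch.Tensor:
    """Bernoulli mask, redrawn every few steps; with high-rate masking
    most coordinates are pulled back to the anchor."""
    return torch.rand(shape, device=device) < sparsity
```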
VLOD-TTA: Test-Time Adaptation of Vision-Language Object Detectors
Atif Belal
Heitor Rapela Medeiros
Eric Granger
AInstein: Can AI Rediscover Scientific Concepts from First Principles?
Shambhavi Mishra
Jose Dolz
Christopher Pal
Large language models have demonstrated remarkable capabilities across diverse tasks, yet a fundamental question remains: can these models genuinely rediscover complex scientific insights, or do they merely recite memorized information? We present AInstein, a novel framework for evaluating whether language models can derive established scientific concepts from first principles when stripped of domain-specific terminology. Rather than testing the recall of scientific facts, we reformulate landmark discoveries as conceptual puzzles, challenging models to reconstruct the underlying technical solutions independently.
Rendering-Aware Reinforcement Learning for Vector Graphics Generation
Juan A. Rodriguez
Haotian Zhang
Rishav Pramanik
Pascal Wichmann
Arnab Mondal
Mohammad Reza Samsami
Sai Rajeswar
Christopher Pal
Scalable Vector Graphics (SVG) offer a powerful format for representing visual designs as interpretable code. Recent advances in vision-language models (VLMs) have enabled high-quality SVG generation by framing the problem as a code generation task and leveraging large-scale pretraining. VLMs are particularly suitable for this task as they capture both global semantics and fine-grained visual patterns, while transferring knowledge across vision, natural language, and code domains. However, existing VLM approaches often struggle to produce faithful and efficient SVGs because they never observe the rendered images during training. Although differentiable rendering for autoregressive SVG code generation remains unavailable, rendered outputs can still be compared to original inputs, enabling evaluative feedback suitable for reinforcement learning (RL). We introduce RLRF (Reinforcement Learning from Rendering Feedback), an RL method that enhances SVG generation in autoregressive VLMs by leveraging feedback from rendered SVG outputs. Given an input image, the model generates SVG roll-outs that are rendered and compared to the original image to compute a reward. This visual fidelity feedback guides the model toward producing more accurate, efficient, and semantically coherent SVGs. RLRF significantly outperforms supervised fine-tuning, addressing common failure modes and enabling precise, high-quality SVG generation with strong structural understanding and generalization.
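
A rendering-based reward of this kind can be approximated with an off-the-shelf rasterizer. The sketch below uses cairosvg and a negative-MSE reward with a length penalty; both the toolchain and the reward shaping are illustrative assumptions, not the RLRF reward itself.

```python
# Sketch of a rendering-based reward: rasterize a generated SVG and score
# pixel fidelity against the target image (reward shaping is illustrative).
import io
import numpy as np
import cairosvg
from PIL import Image

def render_svg(svg_code: str, size: int = 224) -> np.ndarray:
    png = cairosvg.svg2png(bytestring=svg_code.encode(),
                           output_width=size, output_height=size)
    img = Image.open(io.BytesIO(png)).convert("RGB")
    return np.asarray(img, dtype=np.float32) / 255.0

def rendering_reward(svg_code: str, target: np.ndarray,
                     len_coef: float = 1e-5) -> float:
    try:
        rendered = render_svg(svg_code, size=target.shape[0])
    except Exception:
        return -1.0                                   # unrenderable SVG roll-out
    mse = float(np.mean((rendered - target) ** 2))    # visual fidelity term
    return -mse - len_coef * len(svg_code)            # favor short, faithful SVGs
```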
Infrared Object Detection with Ultra Small ConvNets: Is ImageNet Pretraining Still Useful?
Srikanth Muralidharan
Heitor Rapela Medeiros
Masih Aminbeidokhti
Eric Granger
Many real-world applications require recognition models that are robust to different operational conditions and modalities, but at the same time run on small embedded devices with limited hardware. While pre-training is known to be highly beneficial to the accuracy and robustness of normal-size models, its effect on small models that can be deployed on embedded and edge devices is unclear. In this work, we investigate the effect of ImageNet pretraining on increasingly small backbone architectures (ultra-small models, with …)