
Emiliano Penaloza

PhD - Université de Montréal
Supervisor
Research Topics
Deep Learning
Natural Language Processing
Recommender Systems

Publications

How to Train Your LLM Web Agent: A Statistical Diagnosis
Hadi Nekoei
Nicolas Gontier
Miguel Muñoz-Mármol
Stefania Raimondo
Alexandre Piché
Alexandre Lacoste
Massimo Caccia
Large language model (LLM) agents for web interfaces have advanced rapidly, yet open-source systems still lag behind proprietary agents. Bridging this gap is key to enabling customizable, efficient, and privacy-preserving agents. Two challenges hinder progress: the reproducibility issues in RL and LLM agent training, where results often depend on sensitive factors like seeds and decoding parameters, and the focus of prior work on single-step tasks, overlooking the complexities of web-based, multi-step decision-making. We address these gaps by providing a statistically driven study of training LLM agents for web tasks. Our two-stage pipeline combines imitation learning from a Llama 3.3 70B teacher with on-policy fine-tuning via Group Relative Policy Optimization (GRPO) on a Llama 3.1 8B student. Through 240 configuration sweeps and rigorous bootstrapping, we chart the first compute allocation curve for open-source LLM web agents. Our findings show that dedicating one-third of compute to teacher traces and the rest to RL improves MiniWoB++ success by 6 points and closes 60% of the gap to GPT-4o on WorkArena, while cutting GPU costs by 45%. We introduce a principled hyperparameter sensitivity analysis, offering actionable guidelines for robust and cost-effective agent training.
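To make the two-stage recipe concrete, here is a minimal sketch of the standard GRPO advantage normalization (rewards standardized within a group of rollouts for the same task) alongside the roughly one-third / two-thirds compute split the study identifies. The function name, reward values, and GPU-hour figures are hypothetical illustrations, and the normalization shown is the generic GRPO formulation rather than the paper's exact implementation.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Standard GRPO normalization: each rollout's reward is standardized
    against the other rollouts sampled for the same task prompt."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Hypothetical budget illustrating the reported allocation: roughly one third
# of compute on imitation learning from Llama 3.3 70B teacher traces, the
# remainder on on-policy GRPO fine-tuning of the Llama 3.1 8B student.
total_gpu_hours = 900
sft_gpu_hours = total_gpu_hours / 3
grpo_gpu_hours = total_gpu_hours - sft_gpu_hours

# Binary success rewards for four rollouts of the same web task.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
```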
Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization
Tianyue H. Zhang
Mateo Espinosa Zarlenga
Preference Optimization for Concept Bottleneck Models
Tianyue H. Zhang
Mateo Espinosa Zarlenga
Concept Bottleneck Models (CBMs) propose to enhance the trustworthiness of AI systems by constraining their decisions to a set of human-understandable concepts. However, CBMs typically assume that datasets contain accurate concept labels, an assumption often violated in practice, which we show can significantly degrade performance (by 25% in some cases). To address this, we introduce the Concept Preference Optimization (CPO) objective, a new loss function based on Direct Preference Optimization, which effectively mitigates the negative impact of concept mislabeling on CBM performance. We analyze key properties of the CPO objective, showing that it directly optimizes the concept's posterior distribution, and contrast it with Binary Cross Entropy (BCE), showing that CPO is inherently less sensitive to concept noise. We empirically confirm this analysis, finding that CPO consistently outperforms BCE on three real-world datasets, with and without added label noise.
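As a rough illustration of the idea, the sketch below applies a DPO-style preference objective to binary concept predictions, treating the annotated concept value as the preferred outcome and its complement as the dispreferred one. This is an assumption-laden approximation: the paper's actual CPO loss may differ (for instance, in how or whether a reference model enters), and all function and tensor names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def dpo_style_concept_loss(concept_logits, concept_labels, beta=1.0):
    """Preference-style loss over binary concepts: the annotated value is
    treated as the preferred outcome and its complement as the dispreferred
    one, following the DPO log-ratio construction (illustrative only)."""
    log_p = F.logsigmoid(concept_logits)       # log P(concept = 1)
    log_q = F.logsigmoid(-concept_logits)      # log P(concept = 0)
    labels = concept_labels.bool()
    preferred = torch.where(labels, log_p, log_q)
    dispreferred = torch.where(labels, log_q, log_p)
    return -F.logsigmoid(beta * (preferred - dispreferred)).mean()

# Contrast with the usual BCE concept loss on the same (possibly noisy) labels.
logits = torch.randn(8, 5)                     # 8 examples, 5 concepts
labels = torch.randint(0, 2, (8, 5)).float()
cpo_like = dpo_style_concept_loss(logits, labels)
bce = F.binary_cross_entropy_with_logits(logits, labels)
```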
TEARS: Text Representations for Scrutable Recommendations
Traditional recommender systems rely on high-dimensional (latent) embeddings for modeling user-item interactions, often resulting in opaque representations that lack interpretability. Moreover, these systems offer limited control to users over their recommendations. Inspired by recent work, we introduce TExtuAl Representations for Scrutable recommendations (TEARS) to address these challenges. Instead of representing a user's interests through latent embeddings, TEARS encodes them in natural text, providing transparency and allowing users to edit them. To encode such preferences, we use modern LLMs to generate high-quality user summaries, which we find uniquely capture user preferences. Using these summaries, we take a hybrid approach: an optimal transport procedure aligns the summaries' representations with the representation of a standard VAE for collaborative filtering. We find this approach can surpass the performance of three popular VAE models while providing user-controllable recommendations. We further analyze the controllability of TEARS through three simulated user tasks that evaluate the effectiveness of user edits on their summaries. Our code and all user summaries are available in an anonymized repository.
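The alignment step described in the abstract can be pictured with a generic entropy-regularized optimal transport (Sinkhorn) cost between two sets of user representations. This is only a sketch of one plausible instantiation, not necessarily the exact procedure TEARS uses; the embedding dimensions, batch size, and names are hypothetical.

```python
import math
import torch

def sinkhorn_alignment_cost(x, y, epsilon=0.1, n_iters=100):
    """Entropy-regularized OT cost between two sets of user representations,
    e.g. LLM-summary embeddings (x) and VAE user embeddings (y)."""
    cost = torch.cdist(x, y, p=2) ** 2                       # (n, m) squared distances
    n, m = cost.shape
    log_mu = torch.full((n,), -math.log(n))                  # uniform source weights
    log_nu = torch.full((m,), -math.log(m))                  # uniform target weights
    f, g = torch.zeros(n), torch.zeros(m)
    for _ in range(n_iters):                                 # log-domain Sinkhorn updates
        f = -epsilon * torch.logsumexp((g[None, :] - cost) / epsilon + log_nu[None, :], dim=1)
        g = -epsilon * torch.logsumexp((f[:, None] - cost) / epsilon + log_mu[:, None], dim=0)
    log_plan = (f[:, None] + g[None, :] - cost) / epsilon + log_mu[:, None] + log_nu[None, :]
    return (log_plan.exp() * cost).sum()

# Hypothetical usage: penalize misalignment between summary- and VAE-based embeddings.
summary_emb = torch.randn(32, 64)
vae_emb = torch.randn(32, 64)
alignment_penalty = sinkhorn_alignment_cost(summary_emb, vae_emb)
```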