Fernando Diaz

Affiliate Member
Associate Professor, Carnegie Mellon University, School of Computer Science, Language Technologies Institute
Adjunct Professor, McGill University, School of Computer Science
Research Scientist, Google Pittsburgh
Research Topics
Information Retrieval
Recommender Systems

Biography

Fernando Diaz is an associate professor in the School of Computer Science at Carnegie Mellon University. He is also a research scientist at Google (Pittsburgh) and an adjunct member of the School of Computer Science at McGill University.

His primary research interest is information retrieval: the formal study of finding small pieces of information in large collections of data. The most familiar example of information retrieval is web search, where users search a collection of web pages for one or a few relevant pages. Information retrieval, however, extends well beyond this, covering cross-language search, personalization, desktop search, and interactive search. Over the course of his work, Fernando Diaz has explored distributed approaches to web information retrieval, interactive and faceted search, temporal models built from news and query streams, multilingual information retrieval, graph-based retrieval methods, and mining information from multiple corpora.
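
To make the retrieval setting concrete, here is a minimal sketch of that core loop: score every document in a collection against a query and return the best match. The toy corpus and the simple TF-IDF weighting are illustrative assumptions, not code from any of the systems mentioned above.

```python
# Minimal information-retrieval loop: score each document against a query
# using TF-IDF term weighting, then return the highest-scoring document.
import math
from collections import Counter

docs = [
    "web search over large collections of pages",
    "interactive and faceted search interfaces",
    "temporal models of news streams and queries",
]

def tfidf_score(query: str, doc: str, corpus: list) -> float:
    """Sum of TF-IDF weights for the query terms that appear in the document."""
    doc_terms = Counter(doc.split())
    n = len(corpus)
    score = 0.0
    for term in query.split():
        df = sum(1 for d in corpus if term in d.split())
        if df == 0 or term not in doc_terms:
            continue
        idf = math.log((n + 1) / (df + 1)) + 1.0  # smoothed inverse document frequency
        score += doc_terms[term] * idf
    return score

query = "web search"
ranked = sorted(docs, key=lambda d: tfidf_score(query, d, docs), reverse=True)
print(ranked[0])  # -> "web search over large collections of pages"
```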

In his thesis, he studied the relationship between document clustering and document scoring for retrieval, using machine learning and statistical methods. This led him to develop a self-assessing, self-tuning algorithm that significantly improves retrieval performance across a variety of corpora.

Current Students

PhD - McGill
Principal supervisor:

Publications

A Survey of Diversification Techniques in Search and Recommendation
Haolun Wu
Yansen Zhang
Chen Ma
Fuyuan Lyu
Bowei He
Bhaskar Mitra
Diversifying search results is an important research topic in retrieval systems, serving both the varied interests of customers and the equal market exposure of providers. Diversity-aware research has received growing attention in recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems, followed by a summary of the various diversity concerns in search and recommendation, highlighting their relationship and differences. For the survey's main body, we present a unified taxonomy of diversification metrics and approaches in retrieval systems, from both the search and recommendation perspectives. In the later part of the survey, we discuss the open research questions of diversity-aware research in search and recommendation in an effort to inspire future innovations and encourage the implementation of diversity in real-world systems.
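
As one concrete instance of the diversification approaches a survey like this catalogues, the sketch below implements Maximal Marginal Relevance (MMR), a classic greedy re-ranker that trades relevance off against similarity to items already selected. The relevance scores and the topic-based similarity are invented for illustration.

```python
# Greedy MMR re-ranking: at each step, pick the item with the best blend of
# relevance and novelty relative to the items selected so far.
def mmr(candidates, relevance, similarity, k=3, lam=0.7):
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy data: items tagged by topic; same-topic items count as fully similar.
topic = {"a": "rock", "b": "rock", "c": "jazz", "d": "rock"}
rel = {"a": 0.9, "b": 0.85, "c": 0.6, "d": 0.8}
sim = lambda x, y: 1.0 if topic[x] == topic[y] else 0.0
print(mmr(["a", "b", "c", "d"], rel, sim))  # ['a', 'c', 'b']: 'c' jumps ahead for diversity
```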
Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval
Haolun Wu
Ofer Meshi
Masrour Zoghi
Craig Boutilier
Maryam Karimzadehgan
Group Membership Bias
Ali Vardasbi
Maarten de Rijke
Mostafa Dehghani
Global AI Cultures
Rida Qadri
Arjun Subramonian
Sunipa Dev
Georgina Emma Born
Mary L. Gray
Jessica Quaye
Rachel Bergmann
Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery
Rebecca Salganik
As online music platforms grow, music recommender systems play a vital role in helping users navigate and discover content within their vast musical databases. At odds with this larger goal is the presence of popularity bias, which causes algorithmic systems to favor mainstream content over potentially more relevant but niche items. In this work we explore the intrinsic relationship between music discovery and popularity bias. To mitigate this issue we propose a domain-aware, individual fairness-based approach that addresses popularity bias in graph neural network (GNN)-based recommender systems. Our approach uses individual fairness to reflect a ground-truth listening experience: if two songs sound similar, this similarity should be reflected in their representations. In doing so, we facilitate meaningful music discovery that is robust to popularity bias and grounded in the music domain. We apply our BOOST methodology to two discovery-based tasks, performing recommendations at both the playlist level and user level. We then ground our evaluation in the cold-start setting, showing that our approach outperforms existing fairness benchmarks in both performance and recommendation of lesser-known content. Finally, our analysis explains why our proposed methodology is a novel and promising approach to mitigating popularity bias and improving the discovery of new and niche content in music recommender systems.
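
To illustrate the individual-fairness idea the abstract invokes (similar-sounding songs should have similar representations), here is a generic Lipschitz-style penalty in PyTorch. It sketches the general formulation only, not the paper's BOOST method, and all tensors are toy data.

```python
# Individual-fairness penalty: embeddings of items that are similar in the
# ground-truth (audio) space should not be far apart in the learned space.
import torch

def individual_fairness_penalty(embeddings: torch.Tensor,
                                audio_sim: torch.Tensor) -> torch.Tensor:
    # embeddings: (n, d) learned item representations
    # audio_sim:  (n, n) ground-truth similarity in [0, 1] from audio features
    emb_dist = torch.cdist(embeddings, embeddings)  # (n, n) pairwise L2 distances
    audio_dist = 1.0 - audio_sim                    # similar items -> small target distance
    # Hinge: penalize only pairs whose embeddings sit farther apart than the audio allows.
    return torch.clamp(emb_dist - audio_dist, min=0.0).mean()

# Usage: add the penalty to the recommender's training loss.
emb = torch.randn(4, 8, requires_grad=True)
sim = torch.eye(4)  # toy similarity: each song is similar only to itself
loss = individual_fairness_penalty(emb, sim)
loss.backward()
```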
Scaling Laws Do Not Scale
Michael Madaio
Recent work has proposed a power-law relationship, referred to as "scaling laws," between the performance of artificial intelligence (AI) models and aspects of those models' design (e.g., dataset size). In other words, as the size of a dataset (or the number of model parameters, etc.) increases, the performance of a model trained on that dataset will correspondingly increase. However, while compelling in the aggregate, this scaling-law relationship overlooks the ways that the metrics used to measure performance may be precarious and contested, or may not correspond with how different groups of people perceive the quality of models' output. In this paper, we argue that as the size of the datasets used to train large AI models grows, the number of distinct communities (including demographic groups) whose data is included in a given dataset is likely to grow as well, and each of these communities may have different values. As a result, there is an increased risk that communities represented in a dataset may have values or preferences not captured by (or, in the worst case, at odds with) the metrics used to evaluate model performance for scaling laws. We end the paper with implications for AI scaling laws: models may not, in fact, continue to improve as datasets get larger, at least not for all people or communities impacted by those models.
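
The power-law form at issue, and the paper's caveat, can be shown in a few lines: an aggregate error following error(n) = a·n^(−α) + c keeps improving with dataset size, while a hypothetical community whose preferences the metric misses sees no improvement at all. Every constant below is invented for illustration.

```python
# Scaling-law form error(n) = a * n^(-alpha) + c, disaggregated to show that
# an improving aggregate can hide a subgroup whose error stays flat.
def power_law_error(n, a=5.0, alpha=0.3, c=0.1):
    return a * n ** (-alpha) + c

for n in (1e3, 1e6, 1e9):
    majority = power_law_error(n)  # follows the scaling law
    minority = 0.5                 # hypothetical community the metric misses: flat
    aggregate = 0.9 * majority + 0.1 * minority
    print(f"n={n:>12,.0f}  aggregate={aggregate:.3f}  minority={minority:.3f}")
```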
Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision
Across a variety of ranking tasks, researchers use reciprocal rank to measure effectiveness for users interested in exactly one relevant item. Despite its widespread use, evidence suggests that reciprocal rank is brittle when discriminating between systems. This brittleness, in turn, is compounded in modern evaluation settings, where current high-precision systems may be difficult to distinguish. We address the lack of sensitivity of reciprocal rank by introducing and connecting it to the concept of best-case retrieval, an evaluation method focused on assessing the quality of a ranking for the most satisfied possible user across possible recall requirements. This perspective allows us to generalize reciprocal rank and define a new preference-based evaluation we call lexicographic precision, or lexiprecision. By mathematical construction, we ensure that lexiprecision preserves differences detected by reciprocal rank, while empirically improving sensitivity and robustness across a broad set of retrieval and recommendation tasks.
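
A small sketch of the intuition: reciprocal rank depends only on the position of the first relevant item, so it cannot separate two rankings that tie there, whereas lexicographically comparing the sorted positions of all relevant items can. This illustrates the idea behind lexiprecision, not the paper's exact construction.

```python
# Reciprocal rank ties on rankings that place the first relevant item equally;
# comparing all relevant-item positions lexicographically breaks the tie.
def reciprocal_rank(ranking, relevant):
    for i, item in enumerate(ranking, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0

def relevant_positions(ranking, relevant):
    # Sorted ranks of all relevant items; lower positions win under lexicographic comparison.
    return sorted(i for i, item in enumerate(ranking, start=1) if item in relevant)

relevant = {"x", "y"}
run_a = ["x", "y", "b", "c"]  # relevant items at ranks 1 and 2
run_b = ["x", "b", "c", "y"]  # relevant items at ranks 1 and 4

print(reciprocal_rank(run_a, relevant), reciprocal_rank(run_b, relevant))  # 1.0 1.0 (tied)
print(relevant_positions(run_a, relevant) < relevant_positions(run_b, relevant))  # True: run_a preferred
```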
Commonality in Recommender Systems: Evaluating Recommender Systems to Enhance Cultural Citizenship
Andres Ferraro
Gustavo Ferreira
Georgina Born
Recall, Robustness, and Lexicographic Evaluation
Bhaskar Mitra
Preference-Based Offline Evaluation
C. Clarke
Negar Arabzadeh
A core step in production model research and development involves the offline evaluation of a system before production deployment. Traditional offline evaluation of search, recommender, and other systems involves gathering item relevance labels from human editors. These labels can then be used to assess system performance using offline evaluation metrics. Unfortunately, this approach does not work when evaluating highly effective ranking systems, such as those emerging from advances in machine learning. Recent work demonstrates that moving away from pointwise item and metric evaluation can be a more effective approach to the offline evaluation of systems. This tutorial, intended for both researchers and practitioners, reviews early work in preference-based evaluation and covers recent developments in detail.
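
A minimal sketch of the preference-based idea the tutorial covers: instead of pointwise relevance labels, collect pairwise judgments ("item i beats item j for this query") and score a ranking by the fraction of judged preferences it orders correctly. The judgments and rankings below are invented for illustration.

```python
# Preference-based offline evaluation: score a ranking by how many judged
# pairwise preferences it places in the right order.
def preference_agreement(ranking, preferences):
    position = {item: i for i, item in enumerate(ranking)}
    agreed = sum(1 for better, worse in preferences
                 if position[better] < position[worse])
    return agreed / len(preferences)

preferences = [("a", "b"), ("a", "c"), ("c", "b")]  # editor judgments: left beats right
system_1 = ["a", "c", "b"]
system_2 = ["b", "a", "c"]
print(preference_agreement(system_1, preferences))  # 1.0: respects every judgment
print(preference_agreement(system_2, preferences))  # ~0.33: misorders two of three judgments
```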
Recall as a Measure of Ranking Robustness
Bhaskar Mitra
A Survey of Diversification Metrics and Approaches in Retrieval Systems: From the Perspective of Search and Recommendation
Haolun Wu
Yansen Zhang
Chen Ma
Fuyuan Lyu
Diversifying search results is an important research topic in retrieval systems, serving both the varied interests of customers and the equal market exposure of providers. Diversity-aware research has received growing attention in recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems…