
Fernando Diaz

Affiliate Member
Associate Professor, Carnegie Mellon University, School of Computer Science, Language Technologies Institute
Adjunct Professor, McGill University, School of Computer Science
Research Scientist, Google Pittsburgh
Research Topics
Information Retrieval
Recommender Systems

Biography

Fernando Diaz is an associate professor at Carnegie Mellon University's School of Computer Science, a research scientist at Google Pittsburgh, and an adjunct professor in McGill University’s School of Computer Science.

Diaz’s primary research interest is information retrieval, i.e., the formal study of searching large collections of data for small bits of information. The most familiar form of information retrieval is web search, where users search a collection of webpages for one or a few relevant pages. However, information retrieval goes far beyond web search to include processes like cross-lingual retrieval, personalization, desktop search, and interactive retrieval.
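As a concrete illustration of the ad hoc retrieval setting described above, the sketch below ranks a toy document collection against a query with the classic BM25 scoring function. This is a generic textbook example, not code from Diaz's own work; the corpus, query, and parameter values are all invented.

```python
import math
from collections import Counter

# Toy corpus; a real collection would contain millions of documents.
docs = [
    "information retrieval studies searching large collections",
    "web search ranks webpages by relevance to a query",
    "recommender systems suggest items without an explicit query",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
avgdl = sum(len(d) for d in tokenized) / N

def bm25(query, doc, k1=1.5, b=0.75):
    """Classic BM25: reward query-term matches, damped by term
    saturation (k1) and document-length normalization (b)."""
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        df = sum(1 for d in tokenized if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        score += idf * tf[term] * (k1 + 1) / (
            tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "web search relevance"
for i in sorted(range(N), key=lambda i: -bm25(query, tokenized[i])):
    print(f"{bm25(query, tokenized[i]):.3f}  {docs[i]}")
```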

Diaz’s research experience includes distributed information retrieval approaches to web searching, interactive and faceted retrieval, mining of temporal patterns from news and query logs, cross-lingual information retrieval, graph-based retrieval methods, and exploiting information from multiple corpora.

For his PhD research, Diaz studied the relationship between document clustering and document scoring for retrieval using methods from machine learning and statistics. As a result, he developed an algorithm for system self-assessment and self-tuning that significantly improves the performance of retrieval algorithms across a variety of corpora.
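A loose sketch of the general idea behind that line of work, under the clustering hypothesis that similar documents should receive similar scores: diffuse an initial set of retrieval scores over a document-similarity graph. The matrices, weights, and iteration scheme here are invented for illustration and are not the exact formulation from the thesis.

```python
import numpy as np

def regularize_scores(scores, sim, alpha=0.5, iters=10):
    """Diffuse retrieval scores over a document-similarity graph so that
    similar documents end up with similar scores."""
    # Row-normalize the similarity matrix into a stochastic matrix.
    W = sim / sim.sum(axis=1, keepdims=True)
    s = scores.copy()
    for _ in range(iters):
        # Interpolate each document's original score with the
        # similarity-weighted average of its neighbors' scores.
        s = (1 - alpha) * scores + alpha * W @ s
    return s

# Toy example: documents 0 and 1 are near-duplicates.
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
scores = np.array([2.0, 0.5, 1.0])    # raw per-document retrieval scores
print(regularize_scores(scores, sim))  # doc 1 is pulled up toward doc 0
```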

Publications

A Survey of Diversification Techniques in Search and Recommendation
Haolun Wu
Yansen Zhang
Chen Ma
Fuyuan Lyu
Bowei He
Bhaskar Mitra
Diversifying search results is an important research topic in retrieval systems, as it serves both the varied interests of customers and the equal market exposure of providers. Diversity-aware research has attracted growing attention in recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems, followed by a summary of the various diversity concerns in search and recommendation, highlighting their relationship and differences. For the survey's main body, we present a unified taxonomy of diversification metrics and approaches in retrieval systems, from both the search and recommendation perspectives. In the later part of the survey, we discuss open research questions of diversity-aware research in search and recommendation in an effort to inspire future innovations and encourage the implementation of diversity in real-world systems.
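One classic diversification heuristic in the family this survey covers is Maximal Marginal Relevance (MMR), sketched below: greedily pick the item that best trades off relevance against redundancy with items already chosen. The relevance scores and similarity matrix are made up for illustration.

```python
import numpy as np

def mmr(relevance, sim, k, lam=0.5):
    """Greedy Maximal Marginal Relevance: balance an item's relevance
    against its maximum similarity to items already selected."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def marginal(i):
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=marginal)
        selected.append(best)
        candidates.remove(best)
    return selected

relevance = np.array([0.9, 0.85, 0.4])
sim = np.array([[1.0, 0.95, 0.1],   # items 0 and 1 are near-duplicates
                [0.95, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
print(mmr(relevance, sim, k=2))  # [0, 2]: diverse item 2 beats redundant item 1
```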
Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval
Haolun Wu
Ofer Meshi
Masrour Zoghi
Craig Boutilier
Maryam Karimzadehgan
Group Membership Bias
Ali Vardasbi
Maarten de Rijke
Mostafa Dehghani
Global AI Cultures
Rida Qadri
Arjun Subramonian
Sunipa Dev
Georgina Emma Born
Mary L. Gray
Jessica Quaye
Rachel Bergmann
Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery
Rebecca Salganik
As online music platforms grow, music recommender systems play a vital role in helping users navigate and discover content within their vast musical databases. At odds with this larger goal is the presence of popularity bias, which causes algorithmic systems to favor mainstream content over potentially more relevant but niche items. In this work we explore the intrinsic relationship between music discovery and popularity bias. To mitigate this issue we propose a domain-aware, individual fairness-based approach that addresses popularity bias in graph neural network (GNN) based recommender systems. Our approach uses individual fairness to reflect a ground-truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations. In doing so, we facilitate meaningful music discovery that is robust to popularity bias and grounded in the music domain. We apply our BOOST methodology to two discovery-based tasks, performing recommendations at both the playlist level and the user level. We then ground our evaluation in the cold-start setting, showing that our approach outperforms existing fairness benchmarks in both performance and recommendation of lesser-known content. Finally, our analysis explains why our proposed methodology is a novel and promising approach to mitigating popularity bias and improving the discovery of new and niche content in music recommender systems.
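A rough sketch of the individual-fairness intuition stated in the abstract (songs that sound alike should have similar learned representations), written as a differentiable penalty that could be added to a recommender's training loss. This illustrates the general idea only, not the paper's BOOST method; all tensors here are synthetic.

```python
import torch

def fairness_penalty(embeddings, audio_sim):
    """Penalize pairs of songs whose embeddings are far apart even
    though their ground-truth audio similarity is high."""
    # Pairwise squared distances between learned song embeddings.
    dists = torch.cdist(embeddings, embeddings) ** 2
    # Acoustically similar pairs contribute more to the penalty,
    # pulling their representations together during training.
    return (audio_sim * dists).mean()

songs = torch.randn(4, 8, requires_grad=True)   # 4 songs, 8-dim embeddings
audio_sim = torch.tensor([[1.0, 0.8, 0.1, 0.0],
                          [0.8, 1.0, 0.2, 0.1],
                          [0.1, 0.2, 1.0, 0.7],
                          [0.0, 0.1, 0.7, 1.0]])
loss = fairness_penalty(songs, audio_sim)  # add to the recommendation loss
loss.backward()
```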
Scaling Laws Do Not Scale
Michael Madaio
Recent work has proposed a power-law relationship, referred to as "scaling laws," between the performance of artificial intelligence (AI) models and aspects of those models' design (e.g., dataset size). In other words, as the size of a dataset (or the number of model parameters, etc.) increases, the performance of a given model trained on that dataset correspondingly increases. However, while compelling in the aggregate, this scaling-law relationship overlooks the ways that the metrics used to measure performance may be precarious and contested, or may not correspond with how different groups of people perceive the quality of models' output. In this paper, we argue that as the size of datasets used to train large AI models grows, the number of distinct communities (including demographic groups) whose data is included in a given dataset is likely to grow, each of whom may have different values. As a result, there is an increased risk that communities represented in a dataset may have values or preferences not captured by (or, in the worst case, at odds with) the metrics used to evaluate model performance for scaling laws. We end the paper with implications for AI scaling laws: models may not, in fact, continue to improve as datasets get larger, at least not for all people or communities impacted by those models.
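The aggregate relationship the paper critiques typically takes the power-law form loss ≈ a · N^(−b). A minimal sketch of fitting such a curve to hypothetical loss measurements, using the fact that a power law is linear in log-log space:

```python
import numpy as np

# Hypothetical aggregate losses at increasing dataset sizes.
N = np.array([1e4, 1e5, 1e6, 1e7])
loss = np.array([0.80, 0.50, 0.32, 0.20])

# A power law loss = a * N**(-b) satisfies
# log(loss) = log(a) - b * log(N), so fit a line by least squares.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"loss = {a:.2f} * N^(-{b:.3f})")
# The paper's point: such an aggregate fit can hide communities for
# whom quality does not improve (or worsens) as N grows.
```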
Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision
Across a variety of ranking tasks, researchers use reciprocal rank to measure effectiveness for users interested in exactly one relevant item. Despite its widespread use, evidence suggests that reciprocal rank is brittle when discriminating between systems. This brittleness, in turn, is compounded in modern evaluation settings where current, high-precision systems may be difficult to distinguish. We address the lack of sensitivity of reciprocal rank by introducing and connecting it to the concept of best-case retrieval, an evaluation method focused on assessing the quality of a ranking for the most satisfied possible user across possible recall requirements. This perspective allows us to generalize reciprocal rank and define a new preference-based evaluation we call lexicographic precision, or lexiprecision. By mathematical construction, we ensure that lexiprecision preserves differences detected by reciprocal rank, while empirically improving sensitivity and robustness across a broad set of retrieval and recommendation tasks.
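A hand-rolled sketch of reciprocal rank, plus a lexicographic comparison in the spirit of lexiprecision (not the paper's exact construction): compare two rankings by the position of their first relevant item, breaking ties with the second, and so on. The runs and judgments are invented.

```python
def reciprocal_rank(ranking, relevant):
    """1 / position of the first relevant item (0.0 if none retrieved)."""
    for pos, doc in enumerate(ranking, start=1):
        if doc in relevant:
            return 1.0 / pos
    return 0.0

def relevant_positions(ranking, relevant):
    return [pos for pos, doc in enumerate(ranking, 1) if doc in relevant]

def lexi_compare(run_a, run_b, relevant):
    """Lexicographic comparison: earlier first relevant item wins, ties
    broken by the second, then the third, and so on. Returns 1 if A is
    preferred, -1 if B is preferred, 0 if tied. Assumes both runs
    retrieve all relevant items."""
    a = relevant_positions(run_a, relevant)
    b = relevant_positions(run_b, relevant)
    return (a < b) - (a > b)

relevant = {"d2", "d5"}
run_a = ["d2", "d1", "d5", "d3"]
run_b = ["d2", "d5", "d1", "d3"]
print(reciprocal_rank(run_a, relevant),
      reciprocal_rank(run_b, relevant))      # 1.0 1.0: RR cannot tell them apart
print(lexi_compare(run_a, run_b, relevant))  # -1: B places its second relevant item earlier
```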
Commonality in Recommender Systems: Evaluating Recommender Systems to Enhance Cultural Citizenship
Andres Ferraro
Gustavo Ferreira
Georgina Born
Recall, Robustness, and Lexicographic Evaluation
Bhaskar Mitra
Preference-Based Offline Evaluation
C. Clarke
Negar Arabzadeh
A core step in production model research and development involves the offline evaluation of a system before production deployment. Traditional offline evaluation of search, recommender, and other systems involves gathering item relevance labels from human editors. These labels can then be used to assess system performance using offline evaluation metrics. Unfortunately, this approach does not work when evaluating highly effective ranking systems, such as those emerging from advances in machine learning. Recent work demonstrates that moving away from pointwise item and metric evaluation can be a more effective approach to the offline evaluation of systems. This tutorial, intended for both researchers and practitioners, reviews early work in preference-based evaluation and covers recent developments in detail.
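As a toy illustration of moving from pointwise relevance labels to preferences (a generic example, not the tutorial's specific methods), the sketch below scores two hypothetical systems by how often their rankings agree with assessor-stated preferences between document pairs.

```python
def agreement_with_preferences(ranking, prefs):
    """Fraction of judged document pairs (preferred, other) that the
    ranking orders correctly, placing `preferred` above `other`."""
    pos = {doc: i for i, doc in enumerate(ranking)}
    correct = sum(1 for win, lose in prefs
                  if win in pos and lose in pos and pos[win] < pos[lose])
    return correct / len(prefs)

# Hypothetical assessor preferences between documents for one query:
# each pair means "the first document is preferred over the second."
prefs = [("d1", "d2"), ("d1", "d3"), ("d4", "d3")]
system_a = ["d1", "d4", "d2", "d3"]
system_b = ["d2", "d1", "d3", "d4"]
print(agreement_with_preferences(system_a, prefs))  # 1.00
print(agreement_with_preferences(system_b, prefs))  # 0.33
```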
Recall as a Measure of Ranking Robustness
Bhaskar Mitra
A Survey of Diversification Metrics and Approaches in Retrieval Systems: From the Perspective of Search and Recommendation
Haolun Wu
Yansen Zhang
Chen Ma
Fuyuan Lyu
Diversifying search results is an important research topic in retrieval systems, as it serves both the varied interests of customers and the equal market exposure of providers. Diversity-aware research has attracted growing attention in recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems, followed by a summary of the various diversity concerns in search and recommendation, highlighting their relationship and differences.