As AI systems, particularly generative models, increasingly
influence decision-making, ensuring that they are able to
fairly represent diverse human preferences becomes crucial.
This paper introduces a novel framework, inspired by economic
theories of inequality and Rawlsian justice, for evaluating
epistemic fairness in preference learning models. We
propose metrics adapted from the Gini Coefficient, Atkinson
Index, and Kuznets Ratio to quantify fairness in these
models. We validate our approach using a diverse collection
of datasets, covering both visual preferences and textual
content. Our analysis reveals variations in model
performance across users, highlighting potential epistemic
injustices. We explore pre-processing and in-processing
techniques to mitigate these inequalities, demonstrating a
complex relationship between model efficiency and fairness.
This work contributes to AI ethics by providing a framework
for evaluating and improving epistemic fairness in
preference learning models, offering insights for
developing more inclusive AI systems in contexts where
diverse human preferences are crucial.
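
As a rough illustration of the kind of metric adaptation the abstract describes, the sketch below computes a Gini coefficient and an Atkinson index over hypothetical per-user performance scores of a preference model. The scoring setup, variable names, and parameter choices are assumptions for illustration only, not the paper's actual implementation.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality, 1 = maximal inequality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Standard rank-based formula with 1-based ranks over sorted values.
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * v)) / (n * v.sum()) - (n + 1.0) / n

def atkinson(values, epsilon=0.5):
    """Atkinson index with inequality-aversion parameter epsilon (0 <= A <= 1)."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    if mean == 0:
        return 0.0
    if epsilon == 1.0:
        ede = np.exp(np.mean(np.log(v)))  # equally distributed equivalent = geometric mean
    else:
        ede = np.mean(v ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon))
    return 1.0 - ede / mean

# Hypothetical per-user accuracies of a preference model (illustrative data only).
per_user_accuracy = np.array([0.91, 0.88, 0.64, 0.79, 0.55, 0.93])
print(f"Gini:     {gini(per_user_accuracy):.3f}")
print(f"Atkinson: {atkinson(per_user_accuracy):.3f}")
```

Higher values of either index would indicate that model performance is distributed more unevenly across users, which is the kind of disparity the proposed framework is meant to surface.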