Algorithmic Fairness and Social Welfare

Document record

Date

April 5, 2024

Identifier
  • 2404.04424
Collection

arXiv

Organization

Cornell University



Related subjects (En)

Impartiality

Cite this document

Annie Liang et al., "Algorithmic Fairness and Social Welfare", arXiv - economics


Abstract

Algorithms are increasingly used to guide high-stakes decisions about individuals. Consequently, substantial interest has developed around defining and measuring the "fairness" of these algorithms. These definitions of fair algorithms share two features: First, they prioritize the role of a pre-defined group identity (e.g., race or gender) by focusing on how the algorithm's impact differs systematically across groups. Second, they are statistical in nature; for example, comparing false positive rates, or assessing whether group identity is independent of the decision (where both are viewed as random variables). These notions are facially distinct from a social welfare approach to fairness, in particular one based on "veil of ignorance" thought experiments in which individuals choose how to structure society prior to the realization of their social identity. In this paper, we seek to understand and organize the relationship between these different approaches to fairness. Can the optimization criteria proposed in the algorithmic fairness literature also be motivated as the choices of someone from behind the veil of ignorance? If not, what properties distinguish either approach to fairness?
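
As a rough, hypothetical illustration of the two perspectives the abstract contrasts, the Python sketch below (not taken from the paper; the toy data, utility definition, and variable names are assumptions for illustration) computes a group-based statistical fairness measure, the gap in false positive rates between two groups, alongside two simple social welfare measures, average utility and worst-off group utility, that one might evaluate from behind the veil of ignorance.

# Illustrative sketch, not the paper's method: decisions d, outcomes y,
# group labels g, and utilities u on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.integers(0, 2, size=n)   # pre-defined group identity (e.g., 0 or 1)
y = rng.integers(0, 2, size=n)   # true outcome
d = rng.integers(0, 2, size=n)   # algorithm's decision
u = np.where(d == y, 1.0, 0.0)   # toy utility: 1 if the decision matches the outcome

# Statistical (group-based) fairness: compare false positive rates across groups.
def false_positive_rate(decision, outcome):
    negatives = outcome == 0
    return decision[negatives].mean()

fpr_gap = abs(false_positive_rate(d[g == 0], y[g == 0])
              - false_positive_rate(d[g == 1], y[g == 1]))

# Social welfare view: evaluate the distribution of utilities an individual
# faces before knowing their identity, e.g., average welfare or worst-off group.
utilitarian_welfare = u.mean()
rawlsian_welfare = min(u[g == 0].mean(), u[g == 1].mean())

print(f"FPR gap across groups: {fpr_gap:.3f}")
print(f"Utilitarian welfare:   {utilitarian_welfare:.3f}")
print(f"Rawlsian (min-group):  {rawlsian_welfare:.3f}")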
