Testing the Fairness-Accuracy Improvability of Algorithms

Document record

Date

May 8, 2024

Document type

Scope

Identifier
  • 2405.04816
Collection

arXiv

Organization

Cornell University




Cite this document

Eric Auerbach et al., "Testing the Fairness-Accuracy Improvability of Algorithms", arXiv - economics



Abstract

Many organizations use algorithms that have a disparate impact, i.e., the benefits or harms of the algorithm fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, however, because it is often unclear whether it is possible to reduce this impact without sacrificing other objectives of the organization, such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policy-maker can formally substantiate, or refute, this "necessity" defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can be applied under any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test, and illustrate its practical application by evaluating a healthcare algorithm originally considered by Obermeyer et al. (2019). In this application, we reject the null hypothesis that it is not possible to reduce the algorithm's disparate impact without compromising the accuracy of its predictions.
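The paper's actual procedure is an econometric hypothesis test with large-sample guarantees; the details are not reproduced in this record. As a purely illustrative sketch, with simulated data and hypothetical group-specific thresholds (none of which come from the paper), the improvability question can be phrased as searching for a candidate algorithm that weakly preserves accuracy while strictly reducing disparate impact relative to the status quo:

```python
import numpy as np

rng = np.random.default_rng(0)

def disparity(pred, y, group):
    # Disparate-impact proxy (an assumption here): gap in false-negative
    # rates between the two groups.
    def fnr(mask):
        positives = max((y[mask] == 1).sum(), 1)
        return ((pred[mask] == 0) & (y[mask] == 1)).sum() / positives
    return abs(fnr(group == 0) - fnr(group == 1))

def accuracy(pred, y):
    return (pred == y).mean()

# Simulated data: a risk score whose distribution differs by group.
n = 5000
group = rng.integers(0, 2, n)
score = rng.normal(loc=0.5 + 0.2 * group, scale=0.3, size=n)
y = (score + rng.normal(0, 0.2, n) > 0.7).astype(int)

# Status quo: one common decision threshold for both groups.
baseline = (score > 0.7).astype(int)
base_acc = accuracy(baseline, y)
base_disp = disparity(baseline, y, group)

# Search a grid of group-specific thresholds for a dominating candidate:
# at least as accurate, strictly less disparate.
improvable = False
for t0 in np.linspace(0.4, 0.9, 26):
    for t1 in np.linspace(0.4, 0.9, 26):
        pred = np.where(group == 0, score > t0, score > t1).astype(int)
        if accuracy(pred, y) >= base_acc and disparity(pred, y, group) < base_disp:
            improvable = True
```

Finding such a candidate in-sample is only suggestive; the paper's contribution is a formal test that accounts for sampling uncertainty and arbitrary exogenous constraints on the algorithm space.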
