The Fair Game: Auditing & debiasing AI algorithms over time

Document record

Date

June 4, 2025

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1017/cfl.2025.8

Collection

Archives ouvertes

Licenses

http://creativecommons.org/licenses/by/, info:eu-repo/semantics/OpenAccess


Keywords

Ethics


Cite this document

Debabrota Basu et al., « The Fair Game: Auditing & debiasing AI algorithms over time », HAL SHS (Sciences de l’Homme et de la Société), ID : 10.1017/cfl.2025.8





Abstract (English)

An emerging field of AI, namely Fair Machine Learning (ML), aims to quantify different types of bias (also known as unfairness) exhibited in the predictions of ML algorithms and to design new algorithms to mitigate them. Often, the definitions of bias used in the literature are observational, i.e. they use the input and output of a pre-trained algorithm to quantify the bias of concern. In reality, these definitions are often mutually conflicting, and they can be verified only if the ground truth is known, or only in retrospect after the algorithm has been deployed. Thus, there is a gap between what we want Fair ML to achieve and what it does in a dynamic social environment. Hence, we propose an alternative dynamic mechanism, “Fair Game”, to assure fairness in the predictions of an ML algorithm and to adapt its predictions as society interacts with the algorithm over time. “Fair Game” puts an Auditor and a Debiasing algorithm in a loop around an ML algorithm by leveraging Reinforcement Learning (RL). RL algorithms interact with an environment to make decisions, which yields new observations (also known as data or feedback) from the environment and, in turn, adapts future decisions. RL has already been used in algorithms with pre-fixed long-term fairness goals. “Fair Game” provides a unique framework in which the fairness goals can be adapted over time by modifying only the auditor and the different biases it quantifies. Thus, “Fair Game” aims to simulate the evolution of ethical and legal frameworks in society by creating an auditor that sends feedback to a debiasing algorithm deployed around an ML system. This allows us to develop a flexible, adaptive-over-time framework for building Fair ML systems pre- and post-deployment.
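To make the audit-debias loop described in the abstract concrete, here is a minimal, self-contained Python sketch. Everything in it is an illustrative assumption rather than the paper's implementation: the Auditor measures a demographic-parity gap, and the Debiaser is a toy RL-style agent whose action is a shift of one group's decision threshold and whose feedback is the audited bias.

```python
import numpy as np

class Auditor:
    """Quantifies one bias of concern on the deployed predictions.
    Here (an assumption for illustration): the demographic-parity gap,
    i.e. the absolute difference in positive-prediction rates between
    two groups."""

    def audit(self, preds, groups):
        rate_0 = preds[groups == 0].mean()
        rate_1 = preds[groups == 1].mean()
        return abs(rate_0 - rate_1)  # 0 = perfectly fair under this metric


class Debiaser:
    """Toy RL-style agent: its action shifts group 1's decision threshold,
    and the auditor's bias score is its feedback. It keeps moving in the
    same direction while the bias improves and reverses otherwise (a greedy
    hill-climb standing in for a full RL algorithm)."""

    def __init__(self, step=0.05):
        self.shift = 0.0
        self.step = step
        self.best_bias = float("inf")

    def act(self, bias):
        if bias <= self.best_bias:  # last action helped (or tied)
            self.best_bias = bias
        else:                       # last action hurt: reverse course
            self.step = -self.step
        self.shift += self.step
        return self.shift


def fair_game(scores, groups, rounds=20):
    """The loop: predict -> audit -> debias -> redeploy, round after round."""
    auditor, debiaser = Auditor(), Debiaser()
    shift = 0.0
    for t in range(rounds):
        thresholds = np.where(groups == 0, 0.5, 0.5 + shift)
        preds = (scores >= thresholds).astype(float)  # deployed decisions
        bias = auditor.audit(preds, groups)           # auditor's feedback
        shift = debiaser.act(bias)                    # debiaser adapts
        print(f"round {t:2d}: bias = {bias:.3f}, threshold shift = {shift:+.2f}")


# Synthetic data: group 1 systematically receives lower scores, so a single
# shared threshold is biased against it.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=5000)
scores = rng.uniform(size=5000) - 0.15 * groups

fair_game(scores, groups)
```

Note the separation the abstract emphasizes: to pursue a different fairness goal over time, only the auditor's metric would need to change; the surrounding loop and the debiasing agent stay the same.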

