How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies

Document record

Date

October 23, 2021

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1145/3476068

Collection

Archives ouvertes

License

info:eu-repo/semantics/OpenAccess



Related subjects

Trust (Psychology)

Cite this document

Oleksandra Vereschak et al., "How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies", HAL SHS (Sciences de l'Homme et de la Société), ID: 10.1145/3476068



Abstract

The spread of AI-embedded systems involved in human decision making makes studying human trust in these systems critical. However, empirically investigating trust is challenging. One reason is the lack of standard protocols to design trust experiments. In this paper, we present a survey of existing methods to empirically investigate trust in AI-assisted decision making and analyse the corpus along the constitutive elements of an experimental protocol. We find that the definition of trust is not commonly integrated in experimental protocols, which can lead to findings that are overclaimed or are hard to interpret and compare across studies. Drawing from empirical practices in social and cognitive studies on human-human trust, we provide practical guidelines to improve the methodology of studying Human-AI trust in decision-making contexts. In addition, we bring forward research opportunities of two types: one focusing on further investigation regarding trust methodologies and the other on factors that impact Human-AI trust.

CCS Concepts: • Human-centered computing → HCI theory, concepts and models.
