Trust in an autonomous agent for predictive maintenance: how agent transparency could impact compliance.

Document record

Date

24 July 2022

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.54941/ahfe1001602

Collection

Open archives

License

info:eu-repo/semantics/OpenAccess



Related subjects (En)

Transparency

Cite this document

Loïck Simon et al., « Trust in an autonomous agent for predictive maintenance: how agent transparency could impact compliance. », HAL SHS (Sciences de l’Homme et de la Société), ID : 10.54941/ahfe1001602





Abstract (En)

Human-machine cooperation is increasingly present in industry. Machines will act as sources of proposals, giving humans suggestions and advice, and humans will need to decide whether or not to comply with (i.e., agree to) those proposals. Compliance can be seen as an objective measure of trust, yet experimental results remain unclear about the role of risk in this compliance. We wanted to understand how transparency about reliability, about risk, or about both combined would impact compliance with machine proposals. Using an AI for predictive maintenance, we asked participants to make a decision about a rescheduling proposal. Preliminary results show that risk transparency and total transparency are associated with lower compliance with the AI, and that risk transparency is more effective than reliability transparency at creating appropriate trust. In agreement with recent studies, these results point to the need to understand the impact of transparency on human-machine interaction at a finer level.

