Quantifying Retrospective Human Responsibility in Intelligent Systems

Document record

Date

3 August 2023

Document type
Scope
Identifier
  • 2308.01752
Collection

arXiv

Organisation

Cornell University

Cite this document

Nir Douer et al., "Quantifying Retrospective Human Responsibility in Intelligent Systems", arXiv - economics


Abstract

Intelligent systems have become a major part of our lives. Human responsibility for outcomes becomes unclear when people interact with these systems, as parts of information acquisition, decision-making, and action implementation may be carried out jointly by humans and systems. Determining human causal responsibility with intelligent systems is particularly important in events that end with adverse outcomes. We developed three measures of retrospective human causal responsibility when using intelligent systems. The first measure concerns repetitive human interactions with a system. Using information theory, it quantifies the human's average unique contribution to the outcomes of past events. The second and third measures concern human causal responsibility in a single past interaction with an intelligent system. They quantify, respectively, the unique human contribution to forming the information used for decision-making and the reasonableness of the actions the human carried out. The results show that retrospective human responsibility depends on the combined effects of the system's design and reliability, the human's role and authority, and probabilistic factors related to the system and the environment. The new responsibility measures can serve to investigate and analyze past events involving intelligent systems. They may aid judgments of human responsibility and inform ethical and legal discussions, providing a novel quantitative perspective.
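As a rough illustrative sketch only, and not the authors' formulation, the information-theoretic idea behind the first measure can be pictured as the share of outcome uncertainty that the human's actions resolve beyond what the system's indications already resolve, averaged over repeated interactions. The joint distribution, variable names, and normalization in the Python sketch below are assumptions chosen purely for illustration.

# Illustrative sketch only: a simple information-theoretic "unique human
# contribution" ratio, inspired by the abstract's description. The joint
# distribution, variable names, and normalization are assumptions for
# illustration, not the paper's definitions.
from collections import defaultdict
from math import log2

# Hypothetical joint distribution P(system_indication, human_action, outcome)
# for repeated interactions with an alerting system (probabilities sum to 1).
joint = {
    ("alert", "act", "good"): 0.30,
    ("alert", "act", "bad"): 0.05,
    ("alert", "ignore", "good"): 0.05,
    ("alert", "ignore", "bad"): 0.10,
    ("no_alert", "act", "good"): 0.05,
    ("no_alert", "act", "bad"): 0.05,
    ("no_alert", "ignore", "good"): 0.35,
    ("no_alert", "ignore", "bad"): 0.05,
}

def entropy(dist):
    """Shannon entropy (in bits) of a probability dict {event: p}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, keep):
    """Marginalize the joint distribution onto the given index positions."""
    out = defaultdict(float)
    for key, p in joint.items():
        out[tuple(key[i] for i in keep)] += p
    return out

# Conditional entropies via H(X | Y) = H(X, Y) - H(Y).
H_sys_out = entropy(marginal(joint, (0, 2)))   # H(System, Outcome)
H_sys = entropy(marginal(joint, (0,)))         # H(System)
H_all = entropy(joint)                         # H(System, Human, Outcome)
H_sys_hum = entropy(marginal(joint, (0, 1)))   # H(System, Human)

H_out_given_sys = H_sys_out - H_sys            # H(Outcome | System)
H_out_given_sys_hum = H_all - H_sys_hum        # H(Outcome | System, Human)

# Unique human contribution: extra outcome uncertainty resolved by the human's
# actions beyond the system's indications, normalized by the uncertainty left
# after the system, giving a rough "responsibility share" in [0, 1].
unique_human_info = H_out_given_sys - H_out_given_sys_hum  # I(Outcome; Human | System)
responsibility_share = unique_human_info / H_out_given_sys if H_out_given_sys > 0 else 0.0

print(f"I(Outcome; Human | System) = {unique_human_info:.3f} bits")
print(f"Illustrative responsibility share = {responsibility_share:.2f}")

Under this toy distribution the share is small, reflecting a human who mostly follows the system; a human who adds more independent information about the outcome would raise the ratio. The actual measures in the paper also account for single past interactions and the reasonableness of the chosen actions, which this sketch does not attempt to model.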
