The Moral Psychology of Artificial Intelligence

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1146/annurev-psych-030123-113559

Collection

Archives ouvertes (open archives)



Cite this document

Jean-François Bonnefon et al., « The Moral Psychology of Artificial Intelligence », HAL-SHS : économie et finance, ID : 10.1146/annurev-psych-030123-113559



Abstract (English)

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.
