Context-dependent outcome encoding in human reinforcement learning

Document record

Document type
Scope
Language
Identifiers
Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1016/j.cobeha.2021.06.006

Collection

Open archives

Licenses

http://creativecommons.org/licenses/by-nc-nd/, info:eu-repo/semantics/OpenAccess



Related subjects (En)

Proof

Cite this document

Stefano Palminteri et al., « Context-dependent outcome encoding in human reinforcement learning », HAL SHS (Sciences de l’Homme et de la Société), ID : 10.1016/j.cobeha.2021.06.006



Abstract (En)

A wealth of evidence in perceptual and economic decision-making research suggests that the subjective assessment of an option is influenced by its context. A series of studies provides evidence that the same coding principles apply to situations where decisions are shaped by past outcomes, that is, in reinforcement-learning situations. In bandit tasks, human behavior is explained by models assuming that individuals do not learn the objective value of an outcome, but rather its subjective, context-dependent representation. We argue that, while such outcome context-dependence may be informationally or ecologically optimal, it concomitantly undermines the capacity to generalize value-based knowledge to new contexts, sometimes creating apparent decision paradoxes.
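The contrast between learning objective outcome values and learning context-dependent (range-normalized) representations can be sketched as a toy simulation. This is a minimal illustrative sketch, not the authors' model: the learning rate, payoff values, and the specific range-normalization rule are all hypothetical assumptions chosen for clarity.

```python
import random

def simulate(n_trials=1000, relative=True, seed=0):
    """Q-learning on two two-armed bandit contexts.

    If `relative` is True, each outcome is rescaled to the
    context's own outcome range before the value update
    (a simple form of context-dependent encoding).
    """
    rng = random.Random(seed)
    alpha = 0.1  # learning rate (hypothetical)
    # Two contexts with very different outcome magnitudes
    # (payoff values are illustrative, not from the paper).
    contexts = {
        "high": (1.0, 0.5),   # (better option, worse option)
        "low":  (0.1, 0.05),
    }
    q = {(c, a): 0.0 for c in contexts for a in (0, 1)}
    for _ in range(n_trials):
        c = rng.choice(sorted(contexts))
        a = rng.choice((0, 1))  # random exploration, for simplicity
        outcome = contexts[c][a]
        if relative:
            lo, hi = min(contexts[c]), max(contexts[c])
            outcome = (outcome - lo) / (hi - lo)  # range normalization
        # Standard delta-rule update toward the encoded outcome.
        q[(c, a)] += alpha * (outcome - q[(c, a)])
    return q
```

Under relative encoding, the better option in the low-magnitude context converges to roughly the same learned value as the better option in the high-magnitude context, even though its objective payoff is ten times smaller. This illustrates the generalization failure described in the abstract: if those context-dependent values are later compared across contexts, the learner can appear to prefer an objectively inferior option.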

