Stimulus Representation and the Timing of Reward-Prediction Errors in Models of the Dopamine System
Read::
- Stimulus Representation and the Timing of Reward-Prediction Errors in Models of the Dopamine System, E.A. Ludvig, R.S. Sutton, E.J. Kehoe (2008), read 2023-05-26, reading, citation
	- Print::
	- Zotero Link:: Ludvig et al. - 2008 - Stimulus Representation and the Timing of Reward-P.pdf
	- PDF:: NA
	- Files:: Ludvig et al. - 2008 - Stimulus Representation and the Timing of Reward-P.pdf
	- Reading Note::
	- Web Rip::
```dataview
TABLE without id
	file.link as "Related Files",
	title as "Title",
	type as "Type"
FROM "" AND -"ZZ. planning"
WHERE citekey = "ludvigStimulusRepresentationTiming2008"
SORT file.cday DESC
```
Abstract
The phasic firing of dopamine neurons has been theorized to encode a reward-prediction error as formalized by the temporal-difference (TD) algorithm in reinforcement learning. Most TD models of dopamine have assumed a stimulus representation, known as the complete serial compound, in which each moment in a trial is distinctly represented. We introduce a more realistic temporal stimulus representation for the TD model. In our model, all external stimuli, including rewards, spawn a series of internal microstimuli, which grow weaker and more diffuse over time. These microstimuli are used by the TD learning algorithm to generate predictions of future reward. This new stimulus representation injects temporal generalization into the TD model and enhances correspondence between model and data in several experiments, including those when rewards are omitted or received early. This improved fit mostly derives from the absence of large negative errors in the new model, suggesting that dopamine alone can encode the full range of TD errors in these situations.
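The mechanics described in the abstract can be sketched in a few lines of Python. This is a minimal, illustrative reconstruction of the microstimulus idea, not the authors' published code: each stimulus onset (CS or reward) starts an exponentially decaying memory trace, Gaussian basis functions of the trace height yield the microstimulus features, and a standard TD(λ) learner with eligibility traces operates on those features. All function names and parameter values (number of microstimuli, decay rate, sigma, alpha, gamma, lambda) are placeholders chosen for readability, not values from the paper.

```python
import numpy as np

def microstimuli(trace_height, n_micro=10, sigma=0.08):
    """Microstimulus levels from a decaying memory trace.

    Gaussian bumps are centered at evenly spaced trace heights, and each
    level is scaled by the current trace height, so microstimuli become
    weaker and more temporally diffuse as time since onset grows.
    """
    centers = np.linspace(1.0, 1.0 / n_micro, n_micro)
    return trace_height * np.exp(-((trace_height - centers) ** 2)
                                 / (2.0 * sigma ** 2))

def run_trial(w, cs_time=0, reward_time=20, trial_len=40,
              n_micro=10, decay=0.985, alpha=0.01,
              gamma=0.98, lam=0.95):
    """One trial of TD(lambda) over microstimulus features.

    Both the CS and the reward spawn their own microstimulus sets, as in
    the model described above. Returns the per-timestep TD errors, i.e.
    the quantity the model maps onto phasic dopamine. Passing
    reward_time=None simulates a reward-omission probe trial.
    """
    n_features = 2 * n_micro          # CS microstimuli + reward microstimuli
    e = np.zeros(n_features)          # eligibility traces
    x_prev = np.zeros(n_features)
    v_prev = 0.0
    deltas = []
    for t in range(trial_len):
        x = np.zeros(n_features)
        if t >= cs_time:              # CS trace, started at CS onset
            x[:n_micro] = microstimuli(decay ** (t - cs_time), n_micro)
        if reward_time is not None and t >= reward_time:
            x[n_micro:] = microstimuli(decay ** (t - reward_time), n_micro)
        r = 1.0 if t == reward_time else 0.0
        v = w @ x                     # value estimate at time t
        delta = r + gamma * v - v_prev          # TD error
        e = gamma * lam * e + x_prev            # accumulating traces
        w += alpha * delta * e
        deltas.append(delta)
        x_prev, v_prev = x, v
    return deltas
```

A quick usage sketch: train for a few hundred trials with the reward at its usual time, then probe with the reward omitted. Because the microstimuli are coarse and overlapping, the omission response is a modest dip rather than the single large negative error the complete serial compound produces, which is the paper's central point.

```python
w = np.zeros(20)
for _ in range(200):
    run_trial(w)                                  # acquisition trials
omission = run_trial(w.copy(), reward_time=None)  # omission probe
```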
Quick Reference
Top Comments
Let's say grey is for overall comments
Tasks
Topics
Further Reading