Interval timing in deep reinforcement learning agents
Read::
Project:: []
Print:: ❌
Zotero Link:: NA
PDF:: NA
Files:: Deverett et al_2019_Interval timing in deep reinforcement learning agents.pdf; Semantic Scholar Link
Reading Note:: B. Deverett, R. Faulkner, Meire Fortunato, Greg Wayne, Joel Z. Leibo 2019
Web Rip::
TABLE without id
file.link as "Related Files",
title as "Title",
type as "Type"
FROM "" AND -"ZZ. planning"
SORT file.cday DESC
Abstract
The measurement of time is central to intelligent behavior. We know that both animals and artificial agents can successfully use temporal dependencies to select actions. In artificial agents, little work has directly addressed (1) which architectural components are necessary for successful development of this ability, (2) how this timing ability comes to be represented in the units and actions of the agent, and (3) whether the resulting behavior of the system converges on solutions similar to those of biology. Here we studied interval timing abilities in deep reinforcement learning agents trained end-to-end on an interval reproduction paradigm inspired by experimental literature on mechanisms of timing. We characterize the strategies developed by recurrent and feedforward agents, which both succeed at temporal reproduction using distinct mechanisms, some of which bear specific and intriguing similarities to biological systems. These findings advance our understanding of how agents come to represent time, and they highlight the value of experimentally inspired approaches to characterizing agent abilities.
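The abstract says the agents were trained end-to-end on an interval reproduction paradigm. As a rough illustration of what such a paradigm looks like when cast as an RL environment, here is a minimal Python sketch; the class name, cue encoding, reward shaping, and timeout are my own assumptions, and the paper's actual task is a richer visual environment rather than this two-bit observation. The point of the sketch is that the observation carries no explicit clock, so a recurrent agent can solve the task by tracking elapsed steps in its hidden state, whereas a purely feedforward policy has nothing internal to count with, which is part of the recurrent-versus-feedforward contrast the paper examines.

```python
import numpy as np


class IntervalReproductionEnv:
    """Toy interval-reproduction episode (hypothetical sketch, not the paper's task).

    A start cue appears, a stop cue follows after `sample_interval` steps, and the
    agent is rewarded for emitting its 'respond' action the same number of steps
    after the stop cue.
    """

    def __init__(self, min_interval=5, max_interval=30, seed=0):
        self.rng = np.random.default_rng(seed)
        self.min_interval = min_interval
        self.max_interval = max_interval

    def reset(self):
        self.sample_interval = int(self.rng.integers(self.min_interval, self.max_interval + 1))
        self.t = 0
        self.stop_time = 1 + self.sample_interval  # stop cue comes sample_interval steps after the start cue
        self.done = False
        return self._obs()

    def _obs(self):
        # Observation is just [start_cue, stop_cue]; elapsed time is never shown,
        # so the agent must track it itself (e.g. in a recurrent state).
        start_cue = 1.0 if self.t == 1 else 0.0
        stop_cue = 1.0 if self.t == self.stop_time else 0.0
        return np.array([start_cue, stop_cue], dtype=np.float32)

    def step(self, action):
        # action: 0 = wait, 1 = respond
        assert not self.done, "call reset() before stepping a finished episode"
        self.t += 1
        reward = 0.0
        if action == 1 and self.t > self.stop_time:
            reproduced = self.t - self.stop_time
            error = abs(reproduced - self.sample_interval)
            # Reward peaks at 1.0 for a perfect reproduction and falls off with error.
            reward = max(0.0, 1.0 - error / self.sample_interval)
            self.done = True
        elif self.t > self.stop_time + 2 * self.max_interval:
            self.done = True  # timeout: the agent never responded
        return self._obs(), reward, self.done, {}


if __name__ == "__main__":
    env = IntervalReproductionEnv(seed=1)
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = int(np.random.rand() < 0.05)  # placeholder random policy
        obs, reward, done, _ = env.step(action)
        total += reward
    print("sample interval:", env.sample_interval, "episode reward:", total)
```

The `__main__` block only rolls out a random policy to show the interface; training an actual recurrent (e.g. LSTM) or feedforward policy on this environment with a policy-gradient method is left out to keep the sketch short.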
Quick Reference
Top Comments
Topics
Tasks
Further reading
@jazayeriNeuralMechanismSensing2015
—