Convergent and Efficient Deep Q Network Algorithm
Read::
- Convergent and Efficient Deep Q Network Algorithm, Z.T. Wang, M. Ueda (2022). Read 2023-02-28 (reading citation). Print:: Zotero Link:: NA PDF:: NA Files:: arXiv.org Snapshot; Wang_Ueda_2022_Convergent and Efficient Deep Q Network Algorithm.pdf Reading Note:: Z.T. Wang, M. Ueda (2022) Web Rip:: human
```dataview
TABLE without id
	file.link as "Related Files",
	title as "Title",
	type as "Type"
FROM "" AND -"ZZ. planning"
WHERE citekey = "wangConvergentEfficientDeep2022"
SORT file.cday DESC
```
Abstract
Despite the empirical success of the deep Q network (DQN) reinforcement learning algorithm and its variants, DQN is still not well understood, and it does not guarantee convergence. In this work, we show that DQN can indeed diverge and cease to operate in realistic settings. Although there exist gradient-based convergent methods, we show that they actually have inherent problems in learning dynamics which cause them to fail even in simple tasks. To overcome these problems, we propose a convergent DQN algorithm (C-DQN) that is guaranteed to converge and can work with large discount factors (e.g. 0.9998). It learns robustly in difficult settings and can learn several difficult games in the Atari 2600 benchmark that DQN fails to solve. Our code has been publicly released and can be used to reproduce our results.
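My reading of the core idea: the standard DQN loss is a semi-gradient (the bootstrap target is detached, via a target network), which learns fast but can diverge, while the residual-gradient (RG) loss differentiates through the bootstrap term and is convergent but learns poorly. C-DQN combines the two by taking the elementwise maximum of the two losses. Below is a minimal PyTorch sketch of that max-combined loss under my assumptions; the function and argument names (`q_net`, `target_net`, `batch`) and the discount value are illustrative, not taken from the paper's released code.

```python
import torch

def c_dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Sketch of a C-DQN-style loss: elementwise max of the DQN
    (semi-gradient) loss and the residual-gradient (RG) loss.
    `batch` is assumed to hold tensors (s, a, r, s_next, done)."""
    s, a, r, s_next, done = batch

    # Q(s, a) for the actions actually taken.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)

    # DQN loss: bootstrap from the target network, gradient blocked.
    with torch.no_grad():
        target_dqn = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    loss_dqn = (q_sa - target_dqn) ** 2

    # RG loss: bootstrap from the online network, so the gradient flows
    # through both Q(s, a) and max_a' Q(s', a').
    target_rg = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
    loss_rg = (q_sa - target_rg) ** 2

    # C-DQN: train each transition on whichever error is currently larger.
    return torch.max(loss_dqn, loss_rg).mean()
```

As I read the paper's argument, taking the per-transition maximum keeps the objective a genuine loss that decreases during training (giving the convergence guarantee), while in practice the update mostly follows the faster DQN gradient and only falls back to the conservative RG gradient where the semi-gradient error dominates.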
Quick Reference
Top Comments
Let's say grey is for overall comments
Tasks
Topics
Further Reading