Convergent and Efficient Deep Q Network Algorithm

Read::

```dataview
TABLE WITHOUT ID
	file.link AS "Related Files",
	title AS "Title",
	type AS "Type"
FROM "" AND -"ZZ. planning"
WHERE citekey = "wangConvergentEfficientDeep2022"
SORT file.cday DESC
```

Abstract

Despite the empirical success of the deep Q network (DQN) reinforcement learning algorithm and its variants, DQN is still not well understood and is not guaranteed to converge. In this work, we show that DQN can indeed diverge and cease to operate in realistic settings. Although gradient-based convergent methods exist, we show that they have inherent problems in their learning dynamics which cause them to fail even on simple tasks. To overcome these problems, we propose a convergent DQN algorithm (C-DQN) that is guaranteed to converge and can work with large discount factors (0.9998). It learns robustly in difficult settings and can learn several difficult games in the Atari 2600 benchmark that DQN fails to solve. Our code has been publicly released and can be used to reproduce our results.
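For quick reference, below is a minimal PyTorch sketch of the two losses the abstract contrasts: the standard semi-gradient DQN loss (empirically effective but not guaranteed to converge) and the residual-gradient loss (a gradient-based convergent method, but with the problematic learning dynamics the abstract mentions). The tensor shapes, the Huber loss choice, and the use of the 0.9998 discount factor are illustrative assumptions; the exact C-DQN objective is not reproduced here, see the paper's released code for it.

```python
import torch
import torch.nn.functional as F

GAMMA = 0.9998  # the large discount factor quoted in the abstract (assumption: used directly)

def dqn_loss(q_net, target_net, s, a, r, s_next, done):
    # Semi-gradient DQN loss: the bootstrap target is detached from the
    # computation graph, so gradients flow only through Q(s, a).
    # This is the standard (non-convergent) DQN update.
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1.0 - done) * target_net(s_next).max(dim=1).values
    return F.smooth_l1_loss(q, target)

def residual_loss(q_net, s, a, r, s_next, done):
    # Residual-gradient loss: the Bellman error is differentiated through
    # both Q(s, a) and the bootstrap target, which yields convergence
    # guarantees but the poor learning dynamics noted in the abstract.
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    target = r + GAMMA * (1.0 - done) * q_net(s_next).max(dim=1).values  # not detached
    return F.smooth_l1_loss(q, target)
```

Here `s`/`s_next` are state batches, `a` is a LongTensor of chosen actions, and `done` is a float mask (1.0 at episode termination); these conventions are illustrative, not the paper's.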

Quick Reference

Top Comments

Let’s say grey is for overall comments

Tasks

Topics

Further Reading

---

Extracted Annotations and Comments

Figures