Deep Reinforcement Learning: A Survey

Read::

  • Deep Reinforcement Learning: A Survey, X. Wang, S. Wang, X. Liang, D. Zhao, J. Huang, X. Xu, B. Dai, Q. Miao, 2022 🛫 2023-02-19 reading citation

Print:: ✔
Zotero Link:: NA
PDF:: NA
Files:: IEEE Xplore Abstract Record; Wang et al_2022_Deep Reinforcement Learning.pdf
Reading Note:: X. Wang, S. Wang, X. Liang, D. Zhao, J. Huang, X. Xu, B. Dai, Q. Miao (2022)
Web Rip::

TABLE without id
file.link as "Related Files",
title as "Title",
type as "Type"
FROM "" AND -"ZZ. planning"
WHERE citekey = "NA" 
SORT file.cday DESC

Abstract

Deep reinforcement learning (DRL) integrates the feature representation ability of deep learning with the decision-making ability of reinforcement learning so that it can achieve powerful end-to-end learning control capabilities. In the past decade, DRL has made substantial advances in many tasks that require perceiving high-dimensional input and making optimal or near-optimal decisions. However, there are still many challenging problems in the theory and applications of DRL, especially in learning control tasks with limited samples, sparse rewards, and multiple agents. Researchers have proposed various solutions and new theories to solve these problems and promote the development of DRL. In addition, deep learning has stimulated the further development of many subfields of reinforcement learning, such as hierarchical reinforcement learning (HRL), multiagent reinforcement learning, and imitation learning. This article gives a comprehensive overview of the fundamental theories, key algorithms, and primary research domains of DRL. In addition to value-based and policy-based DRL algorithms, the advances in maximum entropy-based DRL are summarized. The future research topics of DRL are also analyzed and discussed.
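As a quick personal reference for the three algorithm families the abstract mentions (value-based, policy-based, and maximum entropy-based DRL), below is a minimal sketch of the loss/target each family optimizes. This is not taken from the survey; the network arguments, batch layout, and hyperparameter values (gamma, alpha) are illustrative assumptions, and the maximum-entropy target follows the discrete-action SAC formulation.

import torch
import torch.nn.functional as F

gamma, alpha = 0.99, 0.2  # assumed discount factor and entropy temperature

def value_based_td_loss(q_net, target_net, batch):
    """Value-based (DQN-style): regress Q(s,a) toward r + gamma * max_a' Q_target(s',a')."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        next_q = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * next_q
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_sa, target)

def policy_gradient_loss(policy_net, batch):
    """Policy-based (REINFORCE-style): maximize log pi(a|s) weighted by the return."""
    s, a, ret = batch
    log_probs = torch.log_softmax(policy_net(s), dim=1)
    log_pi_a = log_probs.gather(1, a.unsqueeze(1)).squeeze(1)
    return -(log_pi_a * ret).mean()

def soft_value_target(policy_net, q_target, batch):
    """Maximum-entropy (SAC-style): the bootstrap target adds an entropy bonus alpha*H(pi(.|s'))."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        logits = policy_net(s_next)
        probs = torch.softmax(logits, dim=1)
        log_probs = torch.log_softmax(logits, dim=1)
        soft_v = (probs * (q_target(s_next) - alpha * log_probs)).sum(dim=1)
        return r + gamma * (1.0 - done) * soft_v

The entropy term in the last sketch is what distinguishes maximum-entropy DRL: the agent is rewarded for keeping its policy stochastic, which the survey discusses as an aid to exploration and robustness.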

Quick Reference

Top Comments

Let's say grey is for overall comments.

Tasks

Topics

Further Reading

—

Extracted Annotations and Comments

Figures