In reinforcement learning, an agent may explore ineffectively in sparse-reward tasks, where finding a reward point is difficult. To address this problem, we propose an algorithm called hierarchical deep reinforcement learning with automatic sub-goal identification via computer vision (HADS), which takes advantage of hierarchical reinforcement learning to alleviate the sparse-reward problem and improves the efficiency of exploration through a sub-goal mechanism. HADS uses a computer vision method to identify sub-goals automatically for hierarchical deep reinforcement learning. Because not all sub-goal points are reachable, a mechanism is proposed to remove unreachable sub-goal points so as to further improve the performance of the algorithm. HADS applies contour recognition to identify sub-goals from the state image: salient states in the image may be recognized as sub-goals, while those that are not suitable are removed based on prior knowledge. Our experiments verify the effectiveness of the algorithm.
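The contour-recognition step can be illustrated with a simplified, dependency-free stand-in. The sketch below thresholds a state image, groups salient pixels into connected regions with a flood fill (a crude substitute for proper contour detection), and keeps region centroids as sub-goal candidates; the area filter plays the role of the prior-knowledge rule that removes unsuitable sub-goal points. The function name, thresholds, and area bounds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def find_subgoal_candidates(state_image, threshold=0.5, min_area=2, max_area=50):
    """Identify sub-goal candidates as centroids of salient connected regions.

    Salient pixels are those above `threshold`; connected regions are found
    with a simple 4-neighbour flood fill. Regions whose area falls outside
    [min_area, max_area] are discarded, standing in for the prior-knowledge
    filter that removes unsuitable sub-goal points.
    """
    salient = state_image > threshold
    h, w = salient.shape
    visited = np.zeros_like(salient, dtype=bool)
    candidates = []
    for i in range(h):
        for j in range(w):
            if salient[i, j] and not visited[i, j]:
                # Flood fill to collect one connected salient region.
                stack, region = [(i, j)], []
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and salient[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if min_area <= len(region) <= max_area:
                    ys, xs = zip(*region)
                    candidates.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return candidates
```

In a full implementation the flood fill would typically be replaced by a contour-extraction routine from a vision library, with the same filter-then-keep structure around it.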
1 Introduction and main contributions Deep reinforcement learning, which combines the advantages of deep learning and reinforcement learning, has achieved success in many fields [1]. However, during the learning process, the agent may still fail at the task by falling into hazardous states after taking improper actions.
To address the slow convergence of query-table-based Dyna optimization algorithms in large-scale state spaces, the difficulty of representing the environment model, and their lagging adaptation to changing environments, we propose a heuristic Dyna optimization algorithm using approximate model representation (HDyna-AMR), which approximates the Q-value function with a linear function and solves for the optimal value function by gradient descent. HDyna-AMR consists of a learning phase and a planning phase. In the learning phase, samples from the agent's interaction with the environment are used to approximately represent the environment model, and feature occurrence frequencies are recorded. In the planning phase, the value function is learned by planning over the approximate environment model, with an extra reward set according to the feature occurrence frequencies recorded during model approximation. The convergence of HDyna-AMR is proved theoretically. The algorithm is applied to the extended Boyan chain problem and the Mountain Car problem. Experimental results show that HDyna-AMR can learn the optimal policy in both discrete and continuous state spaces, and that, compared with Dyna-LAPS (Dyna-style planning with linear approximation and prioritized sweeping) and Sarsa(λ), HDyna-AMR converges faster and corrects its approximate model promptly when the environment changes.
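The two-phase structure can be sketched on a toy chain task (a stand-in for the extended Boyan chain): a linear Q function over one-hot features updated by gradient descent, a learned model, and planning steps that add an extra reward derived from recorded feature frequencies. The hyperparameters and the particular bonus form (a count-based `KAPPA / sqrt(count)` term) are illustrative assumptions, not the paper's specification.

```python
import random
from collections import defaultdict

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # left, right

def step(s, a):
    """Deterministic chain: reward 1 only on reaching the goal state."""
    s2 = min(max(s + ACTIONS[a], 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def phi(s, a):                          # one-hot feature index for (state, action)
    return s * len(ACTIONS) + a

w = [0.0] * (N_STATES * len(ACTIONS))   # linear weights: Q(s, a) = w[phi(s, a)]
counts = defaultdict(int)               # feature occurrence frequencies
model = {}                              # approximate model: (s, a) -> (r, s', done)
ALPHA, GAMMA, EPS, KAPPA = 0.1, 0.95, 0.2, 0.05

def q(s, a):
    return w[phi(s, a)]

def greedy(s):
    return max(range(len(ACTIONS)), key=lambda a: q(s, a))

random.seed(0)
for episode in range(300):
    s = 0
    while True:
        a = random.randrange(len(ACTIONS)) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Learning phase: gradient-descent update of the linear Q function,
        # plus model learning and feature-frequency bookkeeping.
        target = r + (0.0 if done else GAMMA * max(q(s2, b) for b in range(len(ACTIONS))))
        w[phi(s, a)] += ALPHA * (target - q(s, a))
        counts[(s, a)] += 1
        model[(s, a)] = (r, s2, done)
        # Planning phase: replay the approximate model, with an extra reward
        # that favours rarely observed features.
        for _ in range(20):
            ps, pa = random.choice(list(model))
            pr, ps2, pdone = model[(ps, pa)]
            bonus = KAPPA / counts[(ps, pa)] ** 0.5
            ptarget = pr + bonus + (0.0 if pdone else
                                    GAMMA * max(q(ps2, b) for b in range(len(ACTIONS))))
            w[phi(ps, pa)] += ALPHA * (ptarget - q(ps, pa))
        s = s2
        if done:
            break
```

Because the features here are one-hot, the linear approximation degenerates to a table; on continuous tasks such as Mountain Car, the same updates would apply over, for example, tile-coded features.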
1 Introduction. Deep reinforcement learning has achieved great success, especially in games [1] and control. Unfortunately, many real-world environments involve more than one objective. For example, an autonomous car should consider constraints such as driving speed, energy efficiency, and the comfort and safety of its passengers [2,3]. To model such tasks with constraints, the Constrained Markov Decision Process (CMDP) was proposed [4,5].
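In its common discounted form, a CMDP augments the usual MDP objective with cost functions and thresholds; the notation below follows the standard literature rather than any specific formulation in [4,5]:

```latex
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c_i(s_t, a_t)\right] \le d_i,
\qquad i = 1, \dots, m
```

Here $r$ is the reward, each $c_i$ is a cost function encoding one constraint (e.g., energy use or a safety violation signal), and $d_i$ is the allowed budget for that cost.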