Reinforcement Learning With Sparse-Executing Actions via Sparsity Regularization

18 May 2021  ·  Jing-Cheng Pang, Tian Xu, Shengyi Jiang, Yu-Ren Liu, Yang Yu

Reinforcement learning (RL) has made remarkable progress in many decision-making tasks, such as Go, game playing, and robotics control. However, classic RL approaches often presume that all actions can be executed an unlimited number of times, which is inconsistent with many decision-making scenarios in which actions have limited budgets or execution opportunities. Imagine an agent playing a gunfighting game with limited ammunition: it fires only when the enemy appears in the right position, making shooting a sparse-executing action. Such sparse-executing actions have not been considered by classic RL algorithms, either in problem formulation or in algorithm design. This paper addresses the sparse-executing action issue by first formalizing the problem as a Sparse Action Markov Decision Process (SA-MDP), in which certain actions in the action space can only be executed a limited number of times. We then propose a policy optimization algorithm called Action Sparsity REgularization (ASRE), which treats each action with a distinct preference. ASRE evaluates action sparsity through constrained action sampling and regularizes policy training with the evaluated sparsity, represented as an action distribution. In experiments on tasks with known sparse-executing actions, where classical RL algorithms struggle to train policies efficiently, ASRE effectively constrains action sampling and outperforms the baselines. Moreover, we show that ASRE generally improves performance on Atari games, demonstrating its broad applicability.
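
To make the idea of sparsity regularization concrete, the sketch below shows one plausible reading of the abstract: a REINFORCE-style policy-gradient loss augmented with a KL term that pulls the policy toward an estimated "sparsity distribution" over actions, so that actions judged sparse are sampled less often. This is an illustrative sketch only, not the paper's exact ASRE algorithm; the names `asre_style_loss`, `sparsity_probs`, and the coefficient `beta` are hypothetical placeholders, and how the sparsity distribution is estimated from constrained action sampling is left out.

```python
# Illustrative sketch (assumed form, not the paper's exact method): policy-gradient
# training with a KL regularizer toward an estimated sparsity distribution.
import torch
import torch.nn as nn
from torch.distributions import Categorical, kl_divergence


class PolicyNet(nn.Module):
    """Small categorical policy over a discrete action space."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> Categorical:
        return Categorical(logits=self.net(obs))


def asre_style_loss(policy: PolicyNet,
                    obs: torch.Tensor,             # (T, obs_dim) batch of states
                    actions: torch.Tensor,         # (T,) executed actions
                    returns: torch.Tensor,         # (T,) Monte-Carlo returns
                    sparsity_probs: torch.Tensor,  # (n_actions,) estimated sparsity distribution
                    beta: float = 0.1) -> torch.Tensor:
    """Policy-gradient loss plus a sparsity-regularization term (hypothetical form)."""
    dist = policy(obs)
    pg_loss = -(dist.log_prob(actions) * returns).mean()
    # Regularize the policy toward the estimated sparsity distribution so that
    # sparse-executing actions receive low probability during training.
    prior = Categorical(probs=sparsity_probs.expand(obs.shape[0], -1))
    reg = kl_divergence(dist, prior).mean()
    return pg_loss + beta * reg


if __name__ == "__main__":
    policy = PolicyNet(obs_dim=4, n_actions=3)
    optim = torch.optim.Adam(policy.parameters(), lr=3e-4)
    obs = torch.randn(32, 4)
    actions = torch.randint(0, 3, (32,))
    returns = torch.randn(32)
    # Hypothetical sparsity distribution: the last action (e.g. "fire") is sparse.
    sparsity = torch.tensor([0.60, 0.35, 0.05])
    loss = asre_style_loss(policy, obs, actions, returns, sparsity)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In this reading, `beta` trades off return maximization against respecting the action budget; setting it to zero recovers an unregularized policy-gradient update.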
