Reinforcement Learning With Sparse-Executing Actions via Sparsity Regularization

Reinforcement learning (RL) has made remarkable progress in many decision-making tasks, such as Go, game playing, and robotics control. However, classic RL approaches often presume that all actions can be executed an unlimited number of times, which is inconsistent with many decision-making scenarios in which actions have limited budgets or execution opportunities. Imagine an agent playing a gunfighting game with limited ammunition: it fires only when an enemy appears in the right position, making shooting a sparse-executing action. Such sparse-executing actions have not been considered by classic RL algorithms, either in problem formulation or in algorithm design. This paper addresses the sparse-executing action problem by first formalizing it as a Sparse Action Markov Decision Process (SA-MDP), in which certain actions in the action space can only be executed a limited number of times. We then propose a policy optimization algorithm called Action Sparsity REgularization (ASRE), which gives each action a distinct preference. ASRE evaluates action sparsity through constrained action sampling and regularizes policy training based on the evaluated sparsity, represented as an action distribution. In experiments on tasks with known sparse-executing actions, where classic RL algorithms struggle to train policies efficiently, ASRE effectively constrains action sampling and outperforms the baselines. Moreover, we show that ASRE generally improves performance in Atari games, demonstrating its broad applicability.
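The abstract describes regularizing policy training toward an action distribution that reflects action sparsity. The snippet below is a minimal sketch of that idea, not the paper's actual ASRE implementation: it assumes a softmax policy and adds a KL penalty pulling the policy toward a hypothetical reference distribution that assigns low probability to the sparse-executing action. The function names, the regularization weight `lam`, and the reference distribution are all illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over action logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def sparsity_regularized_loss(logits, advantage, action, ref_dist, lam=0.1):
    """Illustrative sketch (not the paper's exact objective):
    a vanilla policy-gradient term plus a KL penalty toward a
    reference action distribution that down-weights sparse actions."""
    probs = softmax(logits)
    pg_loss = -np.log(probs[action]) * advantage      # REINFORCE-style term
    kl = np.sum(probs * np.log(probs / ref_dist))     # KL(pi || ref)
    return pg_loss + lam * kl

# Example: 3 actions; action 2 is sparse-executing, so the (assumed)
# reference distribution assigns it low probability.
logits = np.array([0.5, 0.2, 1.0])
ref = np.array([0.45, 0.45, 0.10])
loss = sparsity_regularized_loss(logits, advantage=1.0, action=2, ref_dist=ref)
```

Under this sketch, raising the policy's probability of the sparse action increases the KL term, so the penalty discourages sampling it unless the advantage signal justifies doing so.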
