Cooperative Guidance Strategy for Active Defense Spacecraft with Imperfect Information via Deep Reinforcement Learning

6 Dec 2022 · Li Zhi, Haizhao Liang, Jinze Wu, Jianying Wang, Yu Zheng

In this paper, an adaptive cooperative guidance strategy is developed for the active protection of a target spacecraft evading an interceptor. The target spacecraft performs evasive maneuvers while launching an active defense vehicle to divert the interceptor. Instead of classical strategies based on optimal control or differential game theory, the problem is solved with deep reinforcement learning, under the assumption of imperfect information about the interceptor's maneuverability. To address the sparse-reward problem, a universal reward design method and an increasingly difficult training approach are presented, utilizing the reward-shaping technique. The guidance law, reward function, and training approach are demonstrated through the learning process and Monte Carlo simulations. The non-sparse reward function and the increasingly difficult training approach accelerate model convergence and alleviate overfitting. With a standard optimal guidance law as a benchmark, the simulation results validate the effectiveness of the proposed guidance strategy and its advantages in guaranteeing the target spacecraft's escape and win rates in the multi-agent game. The trained agent adapts to the interceptor's maneuverability better than the optimal guidance law does, and it achieves better performance with less prior knowledge.
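The abstract's two training ideas — a non-sparse shaped reward and increasingly difficult episodes — can be illustrated with a minimal sketch. The function names, weights, distances, and the acceleration ramp below are all hypothetical assumptions for illustration; the paper's actual reward terms and difficulty schedule are not given in the abstract.

```python
def shaped_reward(d_ti, d_di, d_ti_prev, d_di_prev, escaped, intercepted,
                  w_evade=1.0, w_defend=1.0):
    """Non-sparse reward sketch: dense shaping terms plus terminal bonuses.

    d_ti / d_di are the target-interceptor and defender-interceptor
    distances at the current step; *_prev are the previous-step values.
    All weights and magnitudes here are illustrative, not from the paper.
    """
    # Dense shaping: reward the target for opening distance to the
    # interceptor, and the defender for closing on the interceptor.
    r = w_evade * (d_ti - d_ti_prev) + w_defend * (d_di_prev - d_di)
    # Sparse terminal terms for the game outcome.
    if escaped:
        r += 100.0
    if intercepted:
        r -= 100.0
    return r


def curriculum_max_accel(episode, a_min=1.0, a_max=6.0, ramp_episodes=5000):
    """Increasingly difficult training sketch: the interceptor's maximum
    lateral acceleration (in g, hypothetical range) ramps up linearly
    over the first ramp_episodes episodes, then stays at a_max."""
    frac = min(episode / ramp_episodes, 1.0)
    return a_min + frac * (a_max - a_min)
```

During training, each episode would sample the interceptor's maneuverability from `curriculum_max_accel(episode)` so early episodes are easy to win, which is one common way to realize the "increasingly difficult" schedule the abstract describes.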
