Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents

18 Jan 2021  ·  Tobias Huber, Benedikt Limmer, Elisabeth André

One of the most prominent methods for explaining the behavior of Deep Reinforcement Learning (DRL) agents is the generation of saliency maps that show how much each pixel contributed to the agent's decision. However, there is no work that computationally evaluates and compares the fidelity of different saliency map approaches specifically for DRL agents. Computationally evaluating saliency maps for DRL agents is particularly challenging because their decisions are part of an overarching policy. For instance, the output neurons of value-based DRL algorithms encode both the value of the current state and the value of taking each action in this state. This ambiguity should be taken into account when evaluating saliency maps for such agents. In this paper, we compare five popular perturbation-based approaches for creating saliency maps for DRL agents trained on four different Atari 2600 games. The approaches are compared using two computational metrics: dependence on the learned parameters of the agent (sanity checks) and fidelity to the agent's reasoning (input degradation). During the sanity checks, we encounter issues with one approach and propose a solution to fix them. For fidelity, we identify two main factors that influence which saliency approach should be chosen in which situation.
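For readers unfamiliar with perturbation-based saliency maps, the sketch below illustrates the general idea with a simple occlusion-style variant; it is not the implementation of any of the five approaches compared in the paper. The names `q_network`, `state`, `patch_size`, and `fill_value` are assumptions made for the example: a value-based agent's Q-network is queried on the original state and on copies in which one square patch at a time is replaced by a constant value, and the drop in the chosen action's Q-value is recorded as that patch's saliency.

```python
import numpy as np
import torch


def occlusion_saliency(q_network, state, patch_size=4, fill_value=0.0):
    """Minimal occlusion-style perturbation saliency for a value-based agent.

    q_network: hypothetical callable mapping a (1, C, H, W) float tensor
               to Q-values of shape (1, n_actions).
    state:     np.ndarray of shape (C, H, W), e.g. a stack of preprocessed
               Atari frames.
    Returns a (H, W) map where each entry is the drop in the chosen
    action's Q-value when the corresponding patch is occluded.
    """
    with torch.no_grad():
        x = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        q = q_network(x)
        action = q.argmax(dim=1).item()          # action the agent would take
        base_q = q[0, action].item()             # its Q-value on the unperturbed state

        _, h, w = state.shape
        saliency = np.zeros((h, w), dtype=np.float32)
        for i in range(0, h, patch_size):
            for j in range(0, w, patch_size):
                perturbed = state.copy()
                # occlude one patch with a constant fill value
                perturbed[:, i:i + patch_size, j:j + patch_size] = fill_value
                xp = torch.as_tensor(perturbed, dtype=torch.float32).unsqueeze(0)
                q_perturbed = q_network(xp)[0, action].item()
                # larger drop in the Q-value -> higher saliency for this patch
                saliency[i:i + patch_size, j:j + patch_size] = base_q - q_perturbed
    return saliency
```

The same perturbation machinery underlies the fidelity evaluation described in the abstract: input-degradation metrics remove or perturb the pixels ranked most salient and measure how strongly the agent's output changes, so a saliency map that faithfully reflects the agent's reasoning should produce large output changes when its top-ranked regions are degraded.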
