# Distal Explanations for Model-free Explainable Reinforcement Learning

28 Jan 2020

In this paper we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for 'why' and 'why not' questions. Our starting point is the observation that causal models can generate opportunity chains of the form 'A enables B and B causes C'. Using insights from an analysis of 240 explanations generated in a human-agent experiment, we define a distal explanation model that can analyse counterfactuals and opportunity chains using decision trees and causal models. A recurrent neural network is employed to learn opportunity chains, and decision trees are used to improve the accuracy of task prediction and of the generated counterfactuals. We computationally evaluate the model on 6 reinforcement learning benchmarks using different reinforcement learning algorithms. In a study with 90 human participants, we show that our distal explanation model yields improved outcomes across three scenarios compared with two baseline explanation models.
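To make the decision-tree component concrete, here is a minimal sketch (not the authors' implementation) of predicting an agent's current task from state features and generating a naive 'why not' counterfactual by perturbing one feature at a time. The feature names, task labels, and data are purely illustrative assumptions.

```python
# Illustrative sketch only: decision-tree task prediction plus a naive
# counterfactual search, loosely in the spirit of the paper's model.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical binary state features: [has_wood, near_enemy, health_low]
states = [
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
]
tasks = ["build", "fight", "flee", "heal"]  # hypothetical task label per state

tree = DecisionTreeClassifier(random_state=0).fit(states, tasks)

def why_not(state, desired_task):
    """Flip one binary feature at a time; return the first flip that makes
    the tree predict the desired task, as a (feature_index, new_state) pair."""
    for i in range(len(state)):
        alt = list(state)
        alt[i] = 1 - alt[i]
        if tree.predict([alt])[0] == desired_task:
            return i, alt
    return None

print(tree.predict([[1, 0, 0]])[0])   # predicts "build"
print(why_not([1, 0, 0], "fight"))    # a one-feature counterfactual
```

A real explanation model would, as the abstract describes, combine such task predictions with opportunity chains learned by a recurrent network; this sketch only shows the counterfactual side.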
