Argumentation-based Agents that Explain their Decisions

13 Sep 2020 · Mariela Morveli-Espinoza, Ayslan Possebom, Cesar Augusto Tacla

Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain to the humans (or other systems) with which they interact the internal decisions, behaviours, and reasoning that produce their choices. In this paper, we focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning, specifically about the goals they decide to commit to. Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision, and we use argumentation semantics to determine which arguments (reasons) are acceptable. We propose two types of explanations: a partial one and a complete one. We apply our proposal to a rescue-robot scenario.
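
The abstract only outlines the approach, so the following is a minimal sketch of the core machinery rather than the paper's actual formalization: arguments stand for reasons for or against committing to a goal, an attack relation connects conflicting reasons, and acceptable arguments are computed under grounded semantics (one standard argumentation semantics; the abstract does not say which semantics the authors use). The rescue-room scenario, the argument names A1-A3, and the exact shape of the partial and complete explanations are all illustrative assumptions.

```python
from typing import Dict, Set, Tuple

# Hypothetical arguments for a rescue-robot agent. Each argument is a
# reason for or against committing to a goal; names and contents are
# illustrative, not taken from the paper.
arguments: Dict[str, str] = {
    "A1": "commit to rescuing the victim detected in room 3",
    "A2": "do not enter room 3: the temperature sensor reports fire",
    "A3": "the temperature sensor failed calibration, so its reading is unreliable",
}
# Attack relation: (attacker, attacked).
attacks: Set[Tuple[str, str]] = {("A2", "A1"), ("A3", "A2")}


def grounded_extension(args: Set[str],
                       attacks: Set[Tuple[str, str]]) -> Set[str]:
    """Least fixed point of the characteristic function (grounded semantics)."""
    extension: Set[str] = set()
    while True:
        # An argument is defended if each of its attackers is itself
        # attacked by some argument already in the extension.
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for (b, c) in attacks if c == a)
        }
        if defended == extension:
            return extension
        extension = defended


def partial_explanation(goal_arg: str, acceptable: Set[str]) -> str:
    # Partial explanation (as assumed here): only the accepted reason
    # behind committing to, or dropping, the goal.
    status = "committed to" if goal_arg in acceptable else "not committed to"
    return f"Goal {status} because: {arguments[goal_arg]}"


def complete_explanation(goal_arg: str, acceptable: Set[str]) -> str:
    # Complete explanation (as assumed here): the partial explanation
    # plus how each counter-reason was defeated.
    lines = [partial_explanation(goal_arg, acceptable)]
    for (b, c) in attacks:
        if c == goal_arg:
            defeaters = [d for d in acceptable if (d, b) in attacks]
            reasons = "; ".join(arguments[d] for d in defeaters)
            lines.append(f"Counter-reason rejected ({arguments[b]}) because: {reasons}")
    return "\n".join(lines)


acceptable = grounded_extension(set(arguments), attacks)
print(complete_explanation("A1", acceptable))
```

Here the grounded extension is {A1, A3}, so the agent commits to the rescue goal and can report not only the accepted reason (the partial explanation) but also why the fire warning was discounted (the complete explanation). Grounded semantics returns the most sceptical set of acceptable arguments, which fits explanation well: every reason reported is defended against all of its counter-reasons.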
