When AI systems interact with humans in the loop, they are often called on to
provide explanations for their plans and behavior. Past work on plan
explanations primarily involved the AI system explaining the correctness of its
plan and the rationale for its decision in terms of its own model. Such
soliloquy is wholly inadequate in most realistic scenarios where the humans
have domain and task models that differ significantly from the one used by the
AI system. We posit that explanations are best studied in light of these
differing models. In particular, we show how explanation can be seen as a
"model reconciliation problem" (MRP), where the AI system in effect suggests
changes to the human's model so as to make its plan optimal with respect to
that changed human model. We study the properties of such explanations,
present algorithms for automatically computing them, and evaluate their
performance.
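
As one way to make the MRP statement concrete (the notation below is an
illustrative sketch, not fixed by this abstract): write $M^R$ for the model
the AI system plans with, $M^R_h$ for the human's model of the same task, and
$C(\pi, M)$ for the cost of plan $\pi$ under model $M$. An explanation is then
a model update $\mathcal{E}$ such that the AI's plan $\pi^*$, already optimal
in its own model, is also optimal in the updated human model
$\widehat{M}^R_h = M^R_h + \mathcal{E}$:

\[
C(\pi^*, M^R) = C^*_{M^R}
\qquad \text{and} \qquad
C(\pi^*, \widehat{M}^R_h) = C^*_{\widehat{M}^R_h},
\]

where $C^*_M$ denotes the optimal plan cost achievable in model $M$.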