In this work, we propose PRESCA (PREference Specification through Concept Acquisition), a system that allows users to specify their preferences in terms of concepts that they understand.
Recent advances in large language models (LLMs) have transformed the field of natural language processing (NLP).
In this paper, we show how this new framework captures the various threads of work on human-AI interaction and identifies the fundamental behavioral patterns that these works support.
At the low level, we learn a set of diverse policies for each possible task subgoal identified by the landmarks, and then stitch these policies together.
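A minimal, self-contained sketch may make this two-level scheme concrete: landmarks serve as subgoals on a toy grid, several "diverse" policies are kept per subgoal (here, greedy movers that prefer different axis orders), and execution stitches one policy per subgoal. This is an illustration of the general idea under simplifying assumptions, not the paper's algorithm.

    def make_policy(axis_order):
        """A trivial 'diverse' policy: close the gap along axes in a fixed order."""
        def policy(pos, goal):
            for axis in axis_order:
                if pos[axis] != goal[axis]:
                    step = 1 if goal[axis] > pos[axis] else -1
                    nxt = list(pos)
                    nxt[axis] += step
                    return tuple(nxt)
            return pos
        return policy

    # Two diverse low-level policies per subgoal: x-first vs. y-first movers.
    diverse_policies = [make_policy((0, 1)), make_policy((1, 0))]

    def stitch(start, landmarks, choose):
        """Run one low-level policy per landmark subgoal and chain the results."""
        pos, trace = start, [start]
        for i, subgoal in enumerate(landmarks):
            policy = diverse_policies[choose(i)]
            while pos != subgoal:
                pos = policy(pos, subgoal)
                trace.append(pos)
        return trace

    # Visit two landmark subgoals, alternating which diverse policy is used.
    print(stitch((0, 0), [(2, 1), (3, 3)], choose=lambda i: i % 2))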
The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities.
The former is achieved by a data-driven clustering approach, while for the latter we compile our explanation-generation problem into a POMDP.
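A minimal sketch of what such a POMDP compilation can look like: the hidden state is which model the human holds, an explanation action yields a noisy observation of it, and the belief over human models is updated by Bayes' rule. The candidate models, observation likelihoods, and action below are illustrative assumptions, not the paper's formulation.

    belief = {"M1": 0.5, "M2": 0.5}   # uniform prior over the human's (hidden) model

    # P(observation | human model) after one explanation action, e.g.
    # "reveal the cost of the chosen plan" (numbers are made up).
    obs_likelihood = {
        ("confused", "M1"): 0.8, ("convinced", "M1"): 0.2,
        ("confused", "M2"): 0.1, ("convinced", "M2"): 0.9,
    }

    def update_belief(belief, observation):
        """Standard Bayesian belief update after an explanation action."""
        post = {m: obs_likelihood[(observation, m)] * p for m, p in belief.items()}
        z = sum(post.values())
        return {m: p / z for m, p in post.items()}

    belief = update_belief(belief, "convinced")
    print(belief)  # mass shifts toward M2, guiding the next explanation step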
Operations in many essential industries including finance and banking are often characterized by the need to perform repetitive sequential tasks.
The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus different expectations about the current course of action, forcing the robot to generate costly explicable behavior.
Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation.
An increasingly popular design for AI assistants is the "aggregated assistant" -- realized as an orchestrated composition of several individual skills or agents, each of which can perform atomic tasks.
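As a concrete (if simplified) illustration of this architecture, an orchestrator can poll each atomic skill for a confidence score on an incoming request and route the request to the most confident one. The skill names and keyword-based scoring below are illustrative assumptions, not a real assistant's API.

    class Skill:
        """An atomic skill that scores and handles user requests."""
        def __init__(self, name, keywords):
            self.name, self.keywords = name, keywords

        def confidence(self, utterance):
            hits = sum(w in utterance.lower() for w in self.keywords)
            return hits / len(self.keywords)

        def execute(self, utterance):
            return f"[{self.name}] handling: {utterance}"

    skills = [
        Skill("calendar", ["meeting", "schedule"]),
        Skill("email", ["send", "mail"]),
    ]

    def orchestrate(utterance):
        """Route the request to the skill that claims it most confidently."""
        best = max(skills, key=lambda s: s.confidence(utterance))
        return best.execute(utterance)

    print(orchestrate("schedule a meeting for Friday"))  # -> calendar skill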
Decision support systems seek to enable informed decision-making.
In structured environments (such as warehouses and restaurants), it may be possible to design the environment to boost the interpretability of the robot's behavior or to shape the human's expectations of that behavior.
In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP), which has emerged as a focus area over the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms.
As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions.
Explainable planning is widely accepted as a prerequisite for autonomous agents to successfully work with humans.
In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop, where the human's expectations about an agent may differ from the agent's own model.
There has been significant interest of late in generating agent behavior that is interpretable to the human observer in the loop.
There is a growing interest within the AI research community to develop autonomous systems capable of explaining their behavior to users.
Recent work on explanation generation for decision-making agents has examined how unexplained behavior of autonomous systems can be understood in terms of differences between the system's model and the human's understanding of it, and how the explanation process arising from this mismatch can be seen as a process of reconciling these models.
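A toy sketch of this reconciliation view: represent the agent's and the human's models as fact sets, and search for a smallest set of model updates (the explanation) after which the agent's behavior makes sense in the human's updated model. The facts and the validity test below are illustrative stand-ins.

    from itertools import combinations

    agent_model = {"door_locked", "has_key", "detour_blocked"}
    human_model = {"door_locked"}  # the human is missing two facts

    def plan_makes_sense(model):
        """Stand-in for checking plan validity/optimality in a model."""
        return {"has_key", "detour_blocked"} <= model

    def minimal_explanation(agent_model, human_model):
        """Smallest set of agent-model facts whose transfer reconciles the models."""
        missing = sorted(agent_model - human_model)
        for k in range(len(missing) + 1):
            for subset in combinations(missing, k):
                if plan_makes_sense(human_model | set(subset)):
                    return set(subset)
        return set(missing)

    print(minimal_explanation(agent_model, human_model))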
In this paper, we bring these two concepts together and show how a planner can account for both needs, achieving a trade-off during the plan-generation process itself by means of a model-space search method, MEGA.
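One way to picture the trade-off, in a deliberately simplified form: score each candidate plan by the cost of the explanation it would require plus a weighted penalty for how inexplicable it is, and pick the minimizer. The candidates and numbers below are made up; MEGA's actual model-space search is not reproduced here.

    alpha = 2.0  # relative weight on explicability vs. explanation effort

    # (plan, explanation_cost, inexplicability) candidates
    candidates = [
        ("optimal_plan",    3, 0.0),  # cheapest to execute, long explanation
        ("explicable_plan", 0, 1.5),  # no explanation needed, costly detour
        ("compromise_plan", 1, 0.5),  # short explanation, mild deviation
    ]

    best = min(candidates, key=lambda c: c[1] + alpha * c[2])
    print(best[0])  # -> "compromise_plan" under this weighting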
When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior.
In this paper, we develop a computationally simpler version of the operator-count heuristic for a particular class of domains.
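For context, operator-counting heuristics are commonly computed as a linear program over operator-count variables; the sketch below solves the standard net-change (flow) formulation on a made-up two-operator task. The task is an assumption for illustration, and the paper's particular simplification is not reproduced here.

    from scipy.optimize import linprog

    # Toy task with facts [f0, f1] and operators [a, b]:
    # a produces f0; b consumes f0 and produces f1.
    net_change = [[1, -1],   # f0: +1 per application of a, -1 per b
                  [0,  1]]   # f1: +1 per application of b
    required = [1, 1]        # the goal needs both facts to become true

    # Minimize total operator count subject to the net-change equations.
    res = linprog(c=[1, 1], A_eq=net_change, b_eq=required, bounds=(0, None))
    print(res.fun, res.x)    # heuristic value 3.0 with counts a=2.0, b=1.0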
Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans.