Inference of Human's Observation Strategy for Monitoring Robot's Behavior based on a Game-Theoretic Model of Trust

1 Mar 2019 · Zahra Zahedi, Sailik Sengupta, Subbarao Kambhampati

We consider scenarios where a worker robot, which may be unaware of the human's exact expectations, may have an incentive to deviate from a preferred plan (e.g., one that is safe but costly) when a human supervisor is not monitoring it. On the other hand, continuously monitoring the robot's behavior is often difficult for humans because it consumes valuable resources (e.g., time, cognitive effort). Thus, to optimize the cost of monitoring while ensuring that the robot follows safe behavior, and to help the human deal with potentially unsafe robots, we model this problem in a game-theoretic framework of trust. In settings where the human does not initially trust the robot, a pure-strategy Nash equilibrium provides a useful policy for the human. Unfortunately, we show that the formulated game often lacks a pure-strategy Nash equilibrium. Thus, we define the concept of a trust boundary over the human's mixed-strategy space and show that it helps to discover optimal monitoring strategies. We conduct human subject studies that demonstrate (1) the need for optimal monitoring strategies, and (2) the benefits of using the strategies suggested by our approach.
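To make the claim about the missing pure-strategy Nash equilibrium concrete, here is a minimal sketch of the equilibrium check on a hypothetical 2x2 monitoring game. The payoff matrices H and R below are illustrative assumptions (not values from the paper), chosen so that the human prefers not to pay the monitoring cost against a safe robot, while the robot prefers the cheaper risky plan only when unobserved; this cyclic best-response structure is what rules out a pure-strategy equilibrium.

```python
import numpy as np

# Hypothetical payoffs for a 2x2 human-robot monitoring game
# (illustrative assumptions, not taken from the paper).
# Rows: human strategy (0 = monitor, 1 = don't monitor)
# Cols: robot strategy (0 = safe plan, 1 = risky/cheaper plan)
H = np.array([[2,  3],    # human's payoffs
              [4, -5]])
R = np.array([[1, -4],    # robot's payoffs
              [1,  6]])

def pure_nash_equilibria(H, R):
    """Return all pure-strategy profiles (i, j) at which neither
    player can gain by unilaterally deviating."""
    eqs = []
    for i in range(H.shape[0]):
        for j in range(H.shape[1]):
            human_best = H[i, j] >= H[:, j].max()  # no better row for the human
            robot_best = R[i, j] >= R[i, :].max()  # no better column for the robot
            if human_best and robot_best:
                eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(H, R))  # [] -> no pure-strategy equilibrium here
```

With these payoffs every pure profile gives one player a profitable deviation (monitoring deters the risky plan, but a safe robot makes monitoring wasteful), so the check returns an empty list and the human must reason over mixed strategies, which is where the paper's trust boundary applies.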
