Search Results for author: Sarath Sreedharan

Found 25 papers, 3 papers with code

Towards customizable reinforcement learning agents: Enabling preference specification through online vocabulary expansion

no code implementations • 27 Oct 2022 • Utkarsh Soni, Sarath Sreedharan, Mudit Verma, Lin Guan, Matthew Marquez, Subbarao Kambhampati

In this work, we propose PRESCA (PREference Specification through Concept Acquisition), a system that allows users to specify their preferences in terms of concepts that they understand.


A Mental-Model Centric Landscape of Human-AI Symbiosis

no code implementations • 18 Feb 2022 • Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati

Through this paper, we will see how this new framework allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.

Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity

1 code implementation • 6 Feb 2022 • Lin Guan, Sarath Sreedharan, Subbarao Kambhampati

At the low level, we learn a set of diverse policies for each possible task subgoal identified by the landmark, which are then stitched together.


Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems

no code implementations • 21 Sep 2021 • Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan

The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities.

Not all users are the same: Providing personalized explanations for sequential decision making problems

no code implementations • 23 Jun 2021 • Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati

The former is achieved by a data-driven clustering approach while for the latter, we compile our explanation generation problem into a POMDP.

Decision Making • Explanation Generation

GPT3-to-plan: Extracting plans from text using GPT-3

1 code implementation • 14 Jun 2021 • Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati

Operations in many essential industries including finance and banking are often characterized by the need to perform repetitive sequential tasks.


Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction

no code implementations • 3 May 2021 • Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati

The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus different expectations about the current course of action, forcing the robot to focus on costly explicable behavior.


A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction

no code implementations • 21 Apr 2021 • Sarath Sreedharan, Anagha Kulkarni, David E. Smith, Subbarao Kambhampati

Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation.

A Bayesian Account of Measures of Interpretability in Human-AI Interaction

no code implementations • 22 Nov 2020 • Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati

Existing approaches for the design of interpretable agent behavior consider different measures of interpretability in isolation.

Explainable Composition of Aggregated Assistants

no code implementations • 21 Nov 2020 • Sarath Sreedharan, Tathagata Chakraborti, Yara Rizk, Yasaman Khazaeni

A new design of an AI assistant that has become increasingly popular is that of an "aggregated assistant" -- realized as an orchestrated composition of several individual skills or agents that can each perform atomic tasks.

Designing Environments Conducive to Interpretable Robot Behavior

no code implementations • 2 Jul 2020 • Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David Smith, Subbarao Kambhampati

Given structured environments (like warehouses and restaurants), it may be possible to design the environment so as to boost the interpretability of the robot's behavior or to shape the human's expectations of the robot's behavior.

The Emerging Landscape of Explainable AI Planning and Decision Making

no code implementations • 26 Feb 2020 • Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP), which has emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms.

Decision Making

Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations

no code implementations • ICLR 2022 • Sarath Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, Subbarao Kambhampati

As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions.

Decision Making • Montezuma's Revenge

Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning

no code implementations • 18 Mar 2019 • Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati

In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop where the human's expectations about an agent may differ from the agent's own model.

Decision Making

Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations

no code implementations • 19 Feb 2018 • Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati

There is a growing interest within the AI research community to develop autonomous systems capable of explaining their behavior to users.

Explanation Generation

Plan Explanations as Model Reconciliation -- An Empirical Study

no code implementations • 3 Feb 2018 • Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao Kambhampati

Recent work in explanation generation for decision-making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences between the system's model and the human's understanding of it, and how the explanation process arising from this mismatch can then be seen as a process of reconciling these models.

Decision Making • Explanation Generation

Balancing Explicability and Explanation in Human-Aware Planning

no code implementations • 1 Aug 2017 • Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati

In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself, by means of a model-space search method, MEGA.

Decision Making • Explanation Generation

Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy

no code implementations • 28 Jan 2017 • Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, Subbarao Kambhampati

When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior.

Compliant Conditions for Polynomial Time Approximation of Operator Counts

no code implementations • 25 May 2016 • Tathagata Chakraborti, Sarath Sreedharan, Sailik Sengupta, T. K. Satish Kumar, Subbarao Kambhampati

In this paper, we develop a computationally simpler version of the operator count heuristic for a particular class of domains.

Plan Explicability and Predictability for Robot Task Planning

no code implementations • 25 Nov 2015 • Yu Zhang, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo, Subbarao Kambhampati

Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans.

Motion Planning • Robot Task Planning
