Search Results for author: Sarath Sreedharan

Found 33 papers, 6 papers with code

Handling Reward Misspecification in the Presence of Expectation Mismatch

no code implementations • 12 Apr 2024 • Sarath Sreedharan, Malek Mechergui

Detecting and handling misspecified objectives, such as reward functions, has been widely recognized as one of the central challenges within the domain of Artificial Intelligence (AI) safety research.

Can LLMs Fix Issues with Reasoning Models? Towards More Likely Models for AI Planning

no code implementations • 22 Nov 2023 • Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, Sarath Sreedharan

This is the first work to look at the application of large language models (LLMs) for the purpose of model space edits in automated planning tasks.

TOBY: A Tool for Exploring Data in Academic Survey Papers

1 code implementation • 13 Jun 2023 • Tathagata Chakraborti, Jungkoo Kang, Christian Muise, Sarath Sreedharan, Michael Walker, Daniel Szafir, Tom Williams

This paper describes TOBY, a visualization tool that helps a user explore the contents of an academic survey paper.

On the Planning Abilities of Large Language Models: A Critical Investigation

2 code implementations • 25 May 2023 • Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, Subbarao Kambhampati

We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs in LLM-Modulo settings where they act as a source of heuristic guidance for external planners and verifiers.

Planning for Attacker Entrapment in Adversarial Settings

1 code implementation • 1 Mar 2023 • Brittany Cates, Anagha Kulkarni, Sarath Sreedharan

In this paper, we propose a planning framework to generate a defense strategy against an attacker who is working in an environment where a defender can operate without the attacker's knowledge.

Goal Alignment: A Human-Aware Account of Value Alignment Problem

no code implementations • 2 Feb 2023 • Malek Mechergui, Sarath Sreedharan

To address this lacuna, we propose a novel formulation of the value alignment problem, named goal alignment, which focuses on a few central challenges related to value alignment.

A Mental Model Based Theory of Trust

no code implementations • 29 Jan 2023 • Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati

Handling trust is one of the core requirements for facilitating effective interaction between the human and the AI agent.

Tasks: Decision Making

A Mental-Model Centric Landscape of Human-AI Symbiosis

no code implementations • 18 Feb 2022 • Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati

In this paper, we show how this new framework allows us to capture the various works in the space of human-AI interaction and to identify the fundamental behavioral patterns these works support.

Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity

1 code implementation • 6 Feb 2022 • Lin Guan, Sarath Sreedharan, Subbarao Kambhampati

At the low level, we learn a set of diverse policies for each possible task subgoal identified by the landmarks, which are then stitched together.

Tasks: Reinforcement Learning (RL)

Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems

no code implementations • 21 Sep 2021 • Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan

The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities.

Not all users are the same: Providing personalized explanations for sequential decision making problems

no code implementations • 23 Jun 2021 • Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati

The former is achieved by a data-driven clustering approach, while for the latter, we compile our explanation generation problem into a POMDP.

Tasks: Clustering, Decision Making (+1)

GPT3-to-plan: Extracting plans from text using GPT-3

1 code implementation • 14 Jun 2021 • Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati

Operations in many essential industries including finance and banking are often characterized by the need to perform repetitive sequential tasks.

Tasks: Translation

Trust-Aware Planning: Modeling Trust Evolution in Iterated Human-Robot Interaction

no code implementations • 3 May 2021 • Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati

Trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus different expectations about the current course of action, forcing the robot to resort to costly explicable behavior.

Tasks: Management

A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction

no code implementations • 21 Apr 2021 • Sarath Sreedharan, Anagha Kulkarni, David E. Smith, Subbarao Kambhampati

Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation.

A Bayesian Account of Measures of Interpretability in Human-AI Interaction

no code implementations • 22 Nov 2020 • Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati

Existing approaches for the design of interpretable agent behavior consider different measures of interpretability in isolation.

Explainable Composition of Aggregated Assistants

no code implementations • 21 Nov 2020 • Sarath Sreedharan, Tathagata Chakraborti, Yara Rizk, Yasaman Khazaeni

A new design of an AI assistant that has become increasingly popular is that of an "aggregated assistant" -- realized as an orchestrated composition of several individual skills or agents that can each perform atomic tasks.

Designing Environments Conducive to Interpretable Robot Behavior

no code implementations • 2 Jul 2020 • Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David Smith, Subbarao Kambhampati

Given structured environments (like warehouses and restaurants), it may be possible to design the environment so as to boost the interpretability of the robot's behavior or to shape the human's expectations of the robot's behavior.

The Emerging Landscape of Explainable AI Planning and Decision Making

no code implementations • 26 Feb 2020 • Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP), which has emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms.

Tasks: Decision Making

Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations

no code implementations • ICLR 2022 • Sarath Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, Subbarao Kambhampati

As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions.

Tasks: Decision Making, Montezuma's Revenge

Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning

no code implementations • 18 Mar 2019 • Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati

In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop where the human's expectations about an agent may differ from the agent's own model.

Tasks: Decision Making

Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations

no code implementations • 19 Feb 2018 • Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati

There is a growing interest within the AI research community to develop autonomous systems capable of explaining their behavior to users.

Tasks: Explanation Generation

Plan Explanations as Model Reconciliation -- An Empirical Study

no code implementations • 3 Feb 2018 • Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao Kambhampati

Recent work in explanation generation for decision-making agents has looked at how the unexplained behavior of autonomous systems can be understood in terms of differences between the system's model and the human's understanding of it, and how the explanation process arising from this mismatch can then be seen as one of reconciling these models.

Tasks: Decision Making, Explanation Generation

Balancing Explicability and Explanation in Human-Aware Planning

no code implementations • 1 Aug 2017 • Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati

In this paper, we bring these two concepts together and show how a planner can account for both of these needs and achieve a trade-off during the plan generation process itself by means of a model-space search method, MEGA.

Tasks: Decision Making, Explanation Generation

Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy

no code implementations • 28 Jan 2017 • Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, Subbarao Kambhampati

When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior.

Compliant Conditions for Polynomial Time Approximation of Operator Counts

no code implementations • 25 May 2016 • Tathagata Chakraborti, Sarath Sreedharan, Sailik Sengupta, T. K. Satish Kumar, Subbarao Kambhampati

In this paper, we develop a computationally simpler version of the operator count heuristic for a particular class of domains.

Plan Explicability and Predictability for Robot Task Planning

no code implementations • 25 Nov 2015 • Yu Zhang, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo, Subbarao Kambhampati

Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans.

Tasks: Motion Planning, Robot Task Planning
