Search Results for author: Jivko Sinapov

Found 19 papers, 6 papers with code

Logical Specifications-guided Dynamic Task Sampling for Reinforcement Learning Agents

1 code implementation · 6 Feb 2024 · Yash Shukla, Tanushree Burman, Abhishek Kulkarni, Robert Wright, Alvaro Velasquez, Jivko Sinapov

In this work, we propose a novel approach, called Logical Specifications-guided Dynamic Task Sampling (LSTS), that learns a set of RL policies to guide an agent from an initial state to a goal state based on a high-level task specification, while minimizing the number of environmental interactions.

Continuous Control · Decision Making · +3
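The dynamic task sampling idea in the abstract above can be illustrated as a teacher loop that keeps a success estimate per sub-task and prefers the most promising sub-task that is not yet mastered. This is a minimal sketch, not the authors' implementation; the 0.95 mastery threshold and the `eps` exploration rate are assumptions for illustration.

```python
import random

def sample_task(success_rates, eps=0.2, mastered=0.95, rng=random):
    """Teacher step: among sub-tasks not yet mastered, pick the one with
    the highest estimated success rate; with probability eps, explore a
    random active sub-task instead."""
    active = {t: p for t, p in success_rates.items() if p < mastered}
    if not active:
        return None  # every sub-goal on the path to the goal is mastered
    if rng.random() < eps:
        return rng.choice(sorted(active))
    return max(active, key=active.get)
```

A caller would update `success_rates` from rollout statistics after each training episode and stop once `sample_task` returns `None`.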

NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents Designed for Open Worlds

no code implementations · 7 Jan 2024 · Shivam Goel, Yichen Wei, Panagiotis Lymperopoulos, Matthias Scheutz, Jivko Sinapov

To this end, we introduce NovelGym, a flexible and adaptable ecosystem designed to simulate gridworld environments, serving as a robust platform for benchmarking reinforcement learning (RL) and hybrid planning and learning agents in open-world contexts.

Autonomous Vehicles · Benchmarking · +1

LgTS: Dynamic Task Sampling using LLM-generated sub-goals for Reinforcement Learning Agents

no code implementations · 14 Oct 2023 · Yash Shukla, Wenchang Gao, Vasanth Sarathy, Alvaro Velasquez, Robert Wright, Jivko Sinapov

In this work, we propose LgTS (LLM-guided Teacher-Student learning), a novel approach that explores the planning abilities of LLMs to provide a graphical representation of the sub-goals to a reinforcement learning (RL) agent that does not have access to the transition dynamics of the environment.

Reinforcement Learning (RL)
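The graphical representation of sub-goals mentioned above can be thought of as a directed graph proposed by the LLM, from which a teacher enumerates candidate sub-goal sequences for the student agent to attempt. The sketch below is an illustration under that assumption (the edge dictionary and node names are hypothetical), not the paper's code.

```python
def paths_to_goal(edges, start, goal):
    """Enumerate candidate sub-goal sequences by depth-first search over
    an LLM-proposed graph; the teacher can then sample among these paths."""
    paths = []

    def dfs(node, path):
        if node == goal:
            paths.append(path)
            return
        for nxt in edges.get(node, []):
            if nxt not in path:  # avoid cycles in the proposed graph
                dfs(nxt, path + [nxt])

    dfs(start, [start])
    return paths
```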

A Framework for Few-Shot Policy Transfer through Observation Mapping and Behavior Cloning

1 code implementation · 13 Oct 2023 · Yash Shukla, Bharat Kesari, Shivam Goel, Robert Wright, Jivko Sinapov

We use Generative Adversarial Networks (GANs) along with a cycle-consistency loss to map the observations between the source and target domains and later use this learned mapping to clone the successful source task behavior policy to the target domain.

Transfer Learning
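The cycle-consistency loss named in the abstract above penalizes a pair of mappings when translating an observation to the other domain and back fails to reconstruct it. Below is a dependency-free sketch of that loss for vector observations; in the paper the mappings G and F would be GAN generators, which is abstracted away here.

```python
def cycle_consistency_loss(x_src, y_tgt, G, F):
    """Mean-absolute cycle loss: source -> target -> source (forward
    cycle) and target -> source -> target (backward cycle) should both
    reconstruct the original observation vector."""
    def l1(u, v):
        return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

    forward = l1(F(G(x_src)), x_src)   # G maps source -> target, F maps back
    backward = l1(G(F(y_tgt)), y_tgt)
    return forward + backward
```

The loss is zero exactly when F inverts G on the sampled observations, which is the property that lets the learned mapping transport the source policy's behavior to the target domain.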

Automaton-Guided Curriculum Generation for Reinforcement Learning Agents

1 code implementation · 11 Apr 2023 · Yash Shukla, Abhishek Kulkarni, Robert Wright, Alvaro Velasquez, Jivko Sinapov

Experiments in gridworld and physics-based simulated robotics domains show that the curricula produced by AGCL achieve improved time-to-threshold performance on a complex sequential decision-making problem relative to state-of-the-art curriculum learning (e.g., teacher-student, self-play) and automaton-guided reinforcement learning baselines (e.g., Q-Learning for Reward Machines).

Decision Making · Q-Learning · +2
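An automaton-guided curriculum of the kind described above can be pictured as training sub-tasks in an order consistent with the automaton's transition structure: a sub-task is introduced only after its prerequisites. The sketch below orders sub-tasks with Kahn's topological sort over a hypothetical prerequisite DAG; it illustrates the ordering idea only, not AGCL itself.

```python
from collections import deque

def curriculum_order(dag):
    """Topological order over sub-tasks (Kahn's algorithm): each task
    appears only after all of its prerequisites in the DAG."""
    indeg = {n: 0 for n in dag}
    for n in dag:
        for m in dag[n]:
            indeg[m] = indeg.get(m, 0) + 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dag.get(n, []):
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order
```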

Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments

no code implementations · 28 Feb 2023 · Tung Thai, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz

Learning to detect, characterize, and accommodate novelties is a challenge that agents operating in open-world domains must address in order to guarantee satisfactory task performance.

Novelty Detection

Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation

no code implementations · 21 Dec 2022 · Gyan Tatiya, Jonathan Francis, Luca Bondi, Ingrid Navarro, Eric Nyberg, Jivko Sinapov, Jean Oh

We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects.

Visual Navigation

RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments

1 code implementation · 24 Jun 2022 · Shivam Goel, Yash Shukla, Vasanth Sarathy, Matthias Scheutz, Jivko Sinapov

We propose RAPid-Learn: Learning to Recover and Plan Again, a hybrid planning and learning method, to tackle the problem of adapting to sudden and unexpected changes in an agent's environment (i.e., novelties).

Transfer Learning

Creative Problem Solving in Artificially Intelligent Agents: A Survey and Framework

no code implementations · 21 Apr 2022 · Evana Gizzi, Lakshmi Nair, Sonia Chernova, Jivko Sinapov

Creative Problem Solving (CPS) is a sub-area within Artificial Intelligence (AI) that focuses on methods for solving off-nominal, or anomalous problems in autonomous systems.

An Augmented Reality Platform for Introducing Reinforcement Learning to K-12 Students with Robots

no code implementations · 10 Oct 2021 · Ziyi Zhang, Samuel Micah Akai-Nettey, Adonai Addo, Chris Rogers, Jivko Sinapov

To create a common ground between the human and the learning robot, in this paper, we propose an Augmented Reality (AR) system that reveals the hidden state of the learning to the human users.

reinforcement-learning · Reinforcement Learning (RL)

A Framework for Multisensory Foresight for Embodied Agents

1 code implementation · 15 Sep 2021 · Xiaohui Chen, Ramtin Hosseini, Karen Panetta, Jivko Sinapov

The framework was tested and validated with a dataset containing 4 sensory modalities (vision, haptic, audio, and tactile) on a humanoid robot performing 9 behaviors multiple times on a large set of objects.

Autonomous Vehicles

SPOTTER: Extending Symbolic Planning Operators through Targeted Reinforcement Learning

no code implementations · 24 Dec 2020 · Vasanth Sarathy, Daniel Kasenberg, Shivam Goel, Jivko Sinapov, Matthias Scheutz

Symbolic planning models allow decision-making agents to sequence actions in arbitrary ways to achieve a variety of goals in dynamic domains.

Decision Making · reinforcement-learning · +1

Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey

no code implementations · 10 Mar 2020 · Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, Peter Stone

Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.

reinforcement-learning · Reinforcement Learning (RL) · +1
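The survey's framing above, learning sequential decisions from limited environmental feedback, is captured by tabular Q-learning on a toy problem. The sketch below trains on a hypothetical 5-state chain where only the rightmost state gives reward; it is a generic textbook illustration, not code from the survey, and the hyperparameters are arbitrary.

```python
import random

def q_learning_chain(n_states=5, episodes=200, alpha=0.5, gamma=0.9,
                     eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP: actions 0 (left) and 1 (right),
    reward 1 only on reaching the rightmost state. The agent learns from
    this sparse feedback alone."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # step cap per episode
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)  # explore, or break ties randomly
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return Q
```

After training, the greedy policy moves right in every state, and the learned values decay geometrically with distance from the goal, which is the behavior curriculum methods try to reach with fewer environment interactions.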
