Search Results for author: George Konidaris

Found 60 papers, 18 papers with code

Verifiably Following Complex Robot Instructions with Foundation Models

no code implementations18 Feb 2024 Benedict Quartey, Eric Rosen, Stefanie Tellex, George Konidaris

We propose Language Instruction grounding for Motion Planning (LIMP), a system that leverages foundation models and temporal logics to generate instruction-conditioned semantic maps that enable robots to verifiably follow expressive and long-horizon instructions with open vocabulary referents and complex spatiotemporal constraints.

Motion Planning

Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback

no code implementations3 Oct 2023 Ifrah Idrees, Tian Yun, Naveen Sharma, Yunxin Deng, Nakul Gopalan, George Konidaris, Stefanie Tellex

We propose a novel framework for plan and goal recognition in partially observable domains, Dialogue for Goal Recognition (D4GR), which enables a robot to rectify its belief about human progress by asking clarification questions about noisy sensor data and sub-optimal human actions.

Exploiting Contextual Structure to Generate Useful Auxiliary Tasks

no code implementations9 Mar 2023 Benedict Quartey, Ankit Shah, George Konidaris

We propose an approach that maximizes experience reuse while learning to solve a given task by generating and simultaneously learning useful auxiliary tasks.

counterfactual Counterfactual Reasoning +2

On the Geometry of Reinforcement Learning in Continuous State and Action Spaces

no code implementations29 Dec 2022 Saket Tiwari, Omer Gottesman, George Konidaris

Central to our work is the idea that the transition dynamics induce a low dimensional manifold of reachable states embedded in the high-dimensional nominal state space.

reinforcement-learning Reinforcement Learning (RL)

Effects of Data Geometry in Early Deep Learning

no code implementations29 Dec 2022 Saket Tiwari, George Konidaris

Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure.

Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in Hex

1 code implementation26 Nov 2022 Charles Lovering, Jessica Zosa Forde, George Konidaris, Ellie Pavlick, Michael L. Littman

AlphaZero, an approach to reinforcement learning that couples neural networks and Monte Carlo tree search (MCTS), has produced state-of-the-art strategies for traditional board games like chess, Go, shogi, and Hex.

Board Games

Model-based Lifelong Reinforcement Learning with Bayesian Exploration

2 code implementations20 Oct 2022 Haotian Fu, Shangqun Yu, Michael Littman, George Konidaris

We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks.

reinforcement-learning Reinforcement Learning (RL)

Constrained Dynamic Movement Primitives for Safe Learning of Motor Skills

no code implementations28 Sep 2022 Seiji Shaw, Devesh K. Jha, Arvind Raghunathan, Radu Corcodel, Diego Romeres, George Konidaris, Daniel Nikovski

In this paper, we present constrained dynamic movement primitives (CDMP) which can allow for constraint satisfaction in the robot workspace.

RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents

no code implementations12 Aug 2022 Rafael Rodriguez-Sanchez, Benjamin A. Spiegel, Jennifer Wang, Roma Patel, Stefanie Tellex, George Konidaris

We define precise syntax and grounding semantics for RLang, and provide a parser that grounds RLang programs to an algorithm-agnostic partial world model and policy that can be exploited by an RL agent.

Decision Making reinforcement-learning +2

Meta-Learning Parameterized Skills

1 code implementation7 Jun 2022 Haotian Fu, Shangqun Yu, Saket Tiwari, Michael Littman, George Konidaris

We propose a novel parameterized skill-learning algorithm that aims to learn transferable parameterized skills and synthesize them into a new action space that supports efficient learning in long-horizon tasks.

Meta-Learning Robot Manipulation

Learning Abstract and Transferable Representations for Planning

no code implementations4 May 2022 Steven James, Benjamin Rosman, George Konidaris

We propose a framework for autonomously learning state abstractions of an agent's environment, given a set of skills.

Adaptive Online Value Function Approximation with Wavelets

1 code implementation22 Apr 2022 Michael Beukman, Michael Mitchley, Dean Wookey, Steven James, George Konidaris

We further demonstrate that a fixed wavelet basis set performs comparably against the high-performing Fourier basis on Mountain Car and Acrobot, and that the adaptive methods provide a convenient approach to addressing an oversized initial basis set, while demonstrating performance comparable to, or greater than, the fixed wavelet basis.

Acrobot
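The fixed Fourier basis that the paper benchmarks against can be sketched in a few lines. This is a generic implementation of the standard Fourier basis for linear value-function approximation, not code from the paper:

```python
import itertools
import numpy as np

def fourier_features(state, order=3):
    """Fourier basis features cos(pi * c . s) for a state normalized to
    [0, 1]^d, one feature per coefficient vector c in {0, ..., order}^d."""
    d = len(state)
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=d)))
    return np.cos(np.pi * coeffs @ np.asarray(state))

# Linear value estimate V(s) ~ w . phi(s); Mountain Car has d = 2
# (position, velocity), so order 3 gives (3 + 1)^2 = 16 features.
phi = fourier_features(np.array([0.4, 0.7]), order=3)
print(phi.shape)  # (16,)
```

An adaptive wavelet scheme, by contrast, would start from a basis like this and grow or prune features online rather than fixing the set up front.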

Hierarchical Reinforcement Learning of Locomotion Policies in Response to Approaching Objects: A Preliminary Study

no code implementations20 Mar 2022 Shangqun Yu, Sreehari Rammohan, Kaiyu Zheng, George Konidaris

Animals such as rabbits and birds can instantly generate locomotion behavior in reaction to a dynamic, approaching object, such as a person or a rock, despite having possibly never seen the object before and having limited perception of the object's properties.

Hierarchical Reinforcement Learning Object +2

Coarse-Grained Smoothness for RL in Metric Spaces

no code implementations23 Oct 2021 Omer Gottesman, Kavosh Asadi, Cameron Allen, Sam Lobel, George Konidaris, Michael Littman

We propose a new coarse-grained smoothness definition that generalizes the notion of Lipschitz continuity, is more widely applicable, and allows us to compute significantly tighter bounds on Q-functions, leading to improved learning.

Decision Making
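The classical Lipschitz bound that this definition generalizes can be made concrete: if |Q(x) - Q(y)| <= L * d(x, y), then every previously evaluated point bounds Q at nearby points. A minimal sketch of that classical bound (my own illustration, not the paper's coarse-grained version):

```python
import numpy as np

def lipschitz_upper_bound(query, points, q_values, L):
    """Tightest upper bound on Q(query) implied by Lipschitz continuity:
    Q(query) <= min_i [ Q(x_i) + L * d(query, x_i) ]."""
    dists = np.linalg.norm(points - query, axis=1)
    return float(np.min(q_values + L * dists))

points = np.array([[0.0, 0.0], [1.0, 0.0]])   # previously evaluated points
q_vals = np.array([2.0, 5.0])                  # their Q-values
bound = lipschitz_upper_bound(np.array([0.5, 0.0]), points, q_vals, L=2.0)
print(bound)  # min(2.0 + 1.0, 5.0 + 1.0) = 3.0
```

A coarser smoothness assumption lets the same data justify tighter bounds than the pointwise condition above, which is the paper's point.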

Guided Policy Search for Parameterized Skills using Adverbs

no code implementations23 Oct 2021 Benjamin A. Spiegel, George Konidaris

We present a method for using adverb phrases to adjust skill parameters via learned adverb-skill groundings.

Towards Optimal Correlational Object Search

1 code implementation19 Oct 2021 Kaiyu Zheng, Rohan Chitnis, Yoonchang Sung, George Konidaris, Stefanie Tellex

In realistic applications of object search, robots will need to locate target objects in complex environments while coping with unreliable sensors, especially for small or hard-to-detect objects.

Object

Learning to Infer Kinematic Hierarchies for Novel Object Instances

no code implementations15 Oct 2021 Hameed Abdul-Rashid, Miles Freeman, Ben Abbatematteo, George Konidaris, Daniel Ritchie

Manipulating an articulated object requires perceiving its kinematic hierarchy: its parts, how each can move, and how those motions are coupled.

Instance Segmentation Object +1

Generalizing to New Domains by Mapping Natural Language to Lifted LTL

no code implementations11 Oct 2021 Eric Hsiung, Hiloni Mehta, Junchi Chu, Xinyu Liu, Roma Patel, Stefanie Tellex, George Konidaris

We compare our method of mapping natural language task specifications to intermediate contextual queries against state-of-the-art CopyNet models capable of translating natural language to LTL, by evaluating whether correct LTL for manipulation and navigation task specifications can be output, and show that our method outperforms the CopyNet model on unseen object references.
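To illustrate the kind of LTL target that such natural-language translation systems produce (the concrete formula below is my own example, not one taken from the paper): the command "go to the kitchen and then the bedroom, while always avoiding the hallway" corresponds to

```latex
% F = "eventually", G = "always"; atomic propositions are lifted
% placeholders that get grounded to objects in the new domain.
\varphi \;=\; \mathbf{F}\big(\mathit{kitchen} \wedge \mathbf{F}\,\mathit{bedroom}\big) \;\wedge\; \mathbf{G}\,\neg\mathit{hallway}
```

Lifting replaces the grounded propositions with typed placeholders, which is what allows generalization to unseen object references.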

Bayesian Exploration for Lifelong Reinforcement Learning

no code implementations29 Sep 2021 Haotian Fu, Shangqun Yu, Michael Littman, George Konidaris

A central question in reinforcement learning (RL) is how to leverage prior knowledge to accelerate learning in new tasks.

reinforcement-learning Reinforcement Learning (RL)

Value-Based Reinforcement Learning for Continuous Control Robotic Manipulation in Multi-Task Sparse Reward Settings

no code implementations28 Jul 2021 Sreehari Rammohan, Shangqun Yu, Bowen He, Eric Hsiung, Eric Rosen, Stefanie Tellex, George Konidaris

Learning continuous control in high-dimensional sparse reward settings, such as robotic manipulation, is a challenging problem due to the number of samples often required to obtain accurate optimal value and policy estimates.

Continuous Control Data Augmentation +5

Learning Markov State Abstractions for Deep Reinforcement Learning

1 code implementation NeurIPS 2021 Cameron Allen, Neev Parikh, Omer Gottesman, George Konidaris

A fundamental assumption of reinforcement learning in Markov decision processes (MDPs) is that the relevant decision process is, in fact, Markov.

Continuous Control Contrastive Learning +2

Bootstrapping Motor Skill Learning with Motion Planning

no code implementations12 Jan 2021 Ben Abbatematteo, Eric Rosen, Stefanie Tellex, George Konidaris

We propose using kinematic motion planning as a completely autonomous, sample efficient way to bootstrap motor skill learning for object manipulation.

Motion Planning

Autonomous Learning of Object-Centric Abstractions for High-Level Planning

no code implementations ICLR 2022 Steven James, Benjamin Rosman, George Konidaris

Such representations can immediately be transferred between tasks that share the same types of objects, resulting in agents that require fewer samples to learn a model of a new task.

Object Vocal Bursts Intensity Prediction

Task Scoping: Generating Task-Specific Abstractions for Planning in Open-Scope Models

no code implementations17 Oct 2020 Michael Fishman, Nishanth Kumar, Cameron Allen, Natasha Danas, Michael Littman, Stefanie Tellex, George Konidaris

Unfortunately, planning to solve any specific task using an open-scope model is computationally intractable, even for state-of-the-art methods, due to the many states and actions that are necessarily present in the model but irrelevant to that problem.

Skill Discovery for Exploration and Planning using Deep Skill Graphs

no code implementations ICML Workshop LifelongML 2020 Akhil Bagaria, Jason Crowley, Jing Wei Nicholas Lim, George Konidaris

Temporal abstraction provides an opportunity to drastically lower the decision making burden facing reinforcement learning agents in rich sensorimotor spaces.

Continuous Control Decision Making +1

Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion

1 code implementation4 Jun 2020 Josh Roy, George Konidaris

We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted features between a source and target task.

reinforcement-learning Reinforcement Learning (RL)

Multi-Resolution POMDP Planning for Multi-Object Search in 3D

1 code implementation6 May 2020 Kaiyu Zheng, Yoonchang Sung, George Konidaris, Stefanie Tellex

Robots operating in households must find objects on shelves, under tables, and in cupboards.

Option Discovery using Deep Skill Chaining

1 code implementation ICLR 2020 Akhil Bagaria, George Konidaris

Autonomously discovering temporally extended actions, or skills, is a longstanding goal of hierarchical reinforcement learning.

Continuous Control Hierarchical Reinforcement Learning +2

Efficient Black-Box Planning Using Macro-Actions with Focused Effects

2 code implementations28 Apr 2020 Cameron Allen, Michael Katz, Tim Klinger, George Konidaris, Matthew Riemer, Gerald Tesauro

Focused macros dramatically improve black-box planning efficiency across a wide range of planning domains, sometimes beating even state-of-the-art planners with access to a full domain model.

Learning Deep Parameterized Skills from Demonstration for Re-targetable Visuomotor Control

2 code implementations23 Oct 2019 Jonathan Chang, Nishanth Kumar, Sean Hastings, Aaron Gokaslan, Diego Romeres, Devesh Jha, Daniel Nikovski, George Konidaris, Stefanie Tellex

We demonstrate that our model trained on 33% of the possible goals is able to generalize to more than 90% of the targets in the scene for both simulation and robot experiments.

Zero-Shot Policy Transfer with Disentangled Attention

no code implementations25 Sep 2019 Josh Roy, George Konidaris

In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment.

Domain Adaptation Reinforcement Learning (RL) +1

A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms

no code implementations6 Jul 2019 Oliver Kroemer, Scott Niekum, George Konidaris

A key challenge in intelligent robotics is creating robots that are capable of directly interacting with the world around them to achieve their goals.

Robotics

Grounding Language Attributes to Objects using Bayesian Eigenobjects

no code implementations30 May 2019 Vanya Cohen, Benjamin Burchfiel, Thao Nguyen, Nakul Gopalan, Stefanie Tellex, George Konidaris

Our system is able to disambiguate between novel objects, observed via depth images, based on natural language descriptions.

3D Shape Representation Object

Learning Portable Representations for High-Level Planning

no code implementations ICML 2020 Steven James, Benjamin Rosman, George Konidaris

We present a framework for autonomously learning a portable representation that describes a collection of low-level continuous environments.

Vocal Bursts Intensity Prediction

Probabilistic Category-Level Pose Estimation via Segmentation and Predicted-Shape Priors

no code implementations28 May 2019 Benjamin Burchfiel, George Konidaris

We introduce a new method for category-level pose estimation which produces a distribution over predicted poses by integrating 3D shape estimates from a generative object model with segmentation information.

Object Pose Estimation +1

Finding Options that Minimize Planning Time

no code implementations16 Oct 2018 Yuu Jinnai, David Abel, D. Ellis Hershkowitz, Michael Littman, George Konidaris

We formalize the problem of selecting the optimal set of options for planning as that of computing the smallest set of options such that planning converges in fewer than a given maximum number of value-iteration passes.
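The quantity being minimized, the number of value-iteration sweeps to convergence, is easy to make concrete on a tabular MDP. A generic sketch (the toy MDP below is invented for illustration and is not from the paper):

```python
import numpy as np

def vi_passes(P, R, gamma=0.95, eps=1e-6):
    """Run value iteration on a tabular MDP and count sweeps until the
    max change in V falls below eps. P[a] is an (S, S) transition
    matrix and R[a] an (S,) reward vector."""
    V = np.zeros(P.shape[1])
    passes = 0
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        passes += 1
        if np.max(np.abs(V_new - V)) < eps:
            return passes, V_new
        V = V_new

# Toy 2-state MDP: action 0 stays put, action 1 swaps states.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0],    # action 0: reward 1 only in state 1
              [1.0, 0.0]])   # action 1: reward 1 only in state 0
passes, V = vi_passes(P, R)
print(passes, V)
```

Adding an option amounts to adding a row to Q with its multi-step model, which can shrink this pass count; the paper asks for the smallest option set that brings it under a budget.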

Policy and Value Transfer in Lifelong Reinforcement Learning

no code implementations ICML 2018 David Abel, Yuu Jinnai, Sophie Yue Guo, George Konidaris, Michael Littman

We consider the problem of how best to use prior experience to bootstrap lifelong learning, where an agent faces a series of task instances drawn from some task distribution.

reinforcement-learning Reinforcement Learning (RL)

Hybrid Bayesian Eigenobjects: Combining Linear Subspace and Deep Network Methods for 3D Robot Vision

no code implementations20 Jun 2018 Benjamin Burchfiel, George Konidaris

We introduce Hybrid Bayesian Eigenobjects (HBEOs), a novel representation for 3D objects designed to allow a robot to jointly estimate the pose, class, and full 3D geometry of a novel object observed from a single viewpoint in a single practical framework.

Object

Learning Multi-Level Hierarchies with Hindsight

4 code implementations4 Dec 2017 Andrew Levy, George Konidaris, Robert Platt, Kate Saenko

Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions.

Decision Making Hierarchical Reinforcement Learning

Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

1 code implementation NeurIPS 2017 Taylor W. Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez

We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings.

Transfer Learning

Active Exploration for Learning Symbolic Representations

no code implementations NeurIPS 2017 Garrett Andersen, George Konidaris

We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment.

Mean Actor Critic

2 code implementations1 Sep 2017 Cameron Allen, Kavosh Asadi, Melrose Roderick, Abdel-rahman Mohamed, George Konidaris, Michael Littman

We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning.

Atari Games reinforcement-learning +1
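MAC's central trick, averaging the policy-gradient term over all discrete actions rather than only the sampled one, reduces to a small closed form for a softmax policy at a single state. A sketch based on my own derivation of that per-state gradient, not the paper's code:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def mac_gradient(logits, q_values):
    """Gradient of E_{a ~ pi}[Q(s, a)] w.r.t. softmax logits, computed by
    summing over ALL actions (no sampled-action score function):
    d/dlogit_i = pi_i * (Q_i - sum_a pi_a * Q_a)."""
    pi = softmax(logits)
    return pi * (q_values - pi @ q_values)

g = mac_gradient(np.zeros(3), np.array([1.0, 0.0, -1.0]))
print(g)  # [1/3, 0, -1/3]: shifts probability toward the best action
```

Because the expectation over actions is computed exactly, the estimator avoids the variance introduced by sampling a single action, at the cost of evaluating Q for every action.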

Robust and Efficient Transfer Learning with Hidden-Parameter Markov Decision Processes

1 code implementation20 Jun 2017 Taylor Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez

We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings.

Transfer Learning

Transfer Learning Across Patient Variations with Hidden Parameter Markov Decision Processes

no code implementations1 Dec 2016 Taylor Killian, George Konidaris, Finale Doshi-Velez

Due to physiological variation, patients diagnosed with the same condition may exhibit divergent, but related, responses to the same treatments.

Transfer Learning

Policy Evaluation Using the Ω-Return

no code implementations NeurIPS 2015 Philip S. Thomas, Scott Niekum, Georgios Theocharous, George Konidaris

The benefit of the Ω-return is that it accounts for the correlation of different length returns.

Constructing Abstraction Hierarchies Using a Skill-Symbol Loop

no code implementations25 Sep 2015 George Konidaris

We describe a framework for building abstraction hierarchies whereby an agent alternates skill- and representation-acquisition phases to construct a sequence of increasingly abstract Markov decision processes.

Reinforcement Learning with Parameterized Actions

3 code implementations5 Sep 2015 Warwick Masson, Pravesh Ranchod, George Konidaris

We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions: discrete actions with continuous parameters.

reinforcement-learning Reinforcement Learning (RL)
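The action space in question pairs a discrete choice with a continuous parameter vector. A minimal data-structure sketch (names like "kick-to" are illustrative, borrowed from the robot-soccer setting such work typically targets, not the paper's API):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ParameterizedAction:
    """A discrete action type together with its continuous parameters."""
    name: str
    params: np.ndarray

# The agent first picks WHICH action, then WITH WHAT parameters:
a = ParameterizedAction("kick-to", np.array([1.5, 0.2]))
print(a.name, a.params.tolist())  # kick-to [1.5, 0.2]
```

The learning problem is then two-level: a discrete policy over action types and, for each type, a continuous policy over its parameters.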

TD_gamma: Re-evaluating Complex Backups in Temporal Difference Learning

no code implementations NeurIPS 2011 George Konidaris, Scott Niekum, Philip S. Thomas

We show that the lambda-return target used in the TD(lambda) family of algorithms is the maximum likelihood estimator for a specific model of how the variance of an n-step return estimate increases with n. We introduce the gamma-return estimator, an alternative target based on a more accurate model of variance, which defines the TD_gamma family of complex-backup temporal difference learning algorithms.
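The lambda-return target being re-examined can be computed with a single backward recursion. This is a generic sketch of the standard TD(lambda) target; the gamma-return estimator itself is not reproduced here:

```python
def lambda_return(rewards, values, gamma, lam):
    """Backward recursion G_t = r_t + gamma*((1-lam)*V(s_{t+1}) + lam*G_{t+1}),
    equivalent to the (1-lam)-weighted geometric mixture of n-step returns.
    `values` has one more entry than `rewards` (bootstrap at the horizon)."""
    G = values[-1]
    out = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * G)
        out[t] = G
    return out

# lam = 0 recovers one-step TD targets r_t + gamma * V(s_{t+1}):
print(lambda_return([1.0, 1.0], [0.0, 0.5, 0.25], gamma=0.9, lam=0.0))
# [1.45, 1.225]
```

The paper's observation is that the geometric weights (1-lam)*lam^(n-1) in this mixture correspond to one particular variance model for n-step returns; the gamma-return replaces them with weights derived from a more accurate model.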

Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories

no code implementations NeurIPS 2010 George Konidaris, Scott Kuindersma, Roderic Grupen, Andrew G. Barto

We demonstrate that CST constructs an appropriate skill tree that can be further refined through learning in a challenging continuous domain, and that it can be used to segment demonstration trajectories on a mobile manipulator into chains of skills where each skill is assigned an appropriate abstraction.

reinforcement-learning Reinforcement Learning (RL)

Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining

no code implementations NeurIPS 2009 George Konidaris, Andrew G. Barto

We introduce skill chaining, a skill discovery method for reinforcement learning agents in continuous domains, that builds chains of skills leading to an end-of-task reward.

reinforcement-learning Reinforcement Learning (RL)
