Learning Rational Skills for Planning from Demonstrations and Instructions

We present a framework for learning compositional, rational skill models (RatSkills) that support efficient planning and inverse planning for achieving novel goals and recognizing activities. In contrast to directly learning a set of policies that map states to actions, RatSkills represent each skill as a subgoal that can be executed by a planning subroutine. RatSkills can be learned by observing expert demonstrations and reading abstract language descriptions of the corresponding task (e.g., collect wood, then craft a boat, then go across the river). The learned subgoal-based representation enables inference of another agent's intended task from its actions via Bayesian inverse planning. It also supports planning for novel objectives given in the form of either temporal task descriptions or black-box goal tests. We demonstrate through experiments in both discrete and continuous domains that our learning algorithms recover a set of RatSkills by observing and explaining other agents' movements, and plan efficiently for novel goals by composing learned skills.
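To make the Bayesian inverse planning idea concrete, here is a minimal toy sketch (all names, the 1-D grid setup, and the Boltzmann-rational action model are illustrative assumptions, not the paper's implementation). Each candidate task is a sequence of subgoals; an observed trajectory is scored under a noisily rational agent that moves toward its current subgoal, and Bayes' rule yields a posterior over which task the agent is pursuing.

```python
import math

def trajectory_loglik(trajectory, subgoals, beta=2.0):
    """Log-likelihood of a 1-D trajectory under a task (a subgoal sequence).

    Toy model (an assumption for illustration): at each step the agent picks
    a move of +1 or -1 with Boltzmann-rational probability, where utility is
    the negative distance to the current subgoal; reaching a subgoal
    advances to the next one.
    """
    loglik = 0.0
    idx = 0  # index of the subgoal currently being pursued
    for prev, cur in zip(trajectory, trajectory[1:]):
        goal = subgoals[min(idx, len(subgoals) - 1)]
        utilities = {step: -abs((prev + step) - goal) for step in (+1, -1)}
        z = sum(math.exp(beta * u) for u in utilities.values())
        loglik += beta * utilities[cur - prev] - math.log(z)
        if cur == goal:
            idx += 1  # subgoal reached; move on to the next one
    return loglik

def posterior_over_tasks(trajectory, tasks, prior=None):
    """Posterior P(task | trajectory) via Bayes' rule with a uniform prior."""
    names = list(tasks)
    prior = prior or {n: 1.0 / len(names) for n in names}
    logpost = {n: math.log(prior[n]) + trajectory_loglik(trajectory, tasks[n])
               for n in names}
    m = max(logpost.values())  # subtract max for numerical stability
    unnorm = {n: math.exp(lp - m) for n, lp in logpost.items()}
    z = sum(unnorm.values())
    return {n: p / z for n, p in unnorm.items()}

# An agent at position 0 walks steadily rightward: far more consistent
# with the (hypothetical) task whose first subgoal lies to the right.
tasks = {"wood_then_boat": [3, 5], "mine_ore": [-4]}
post = posterior_over_tasks([0, 1, 2, 3], tasks)
```

The same structure scales to the paper's setting by replacing the toy action model with a planner over learned subgoal-based skills.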
