Robot Task Planning
5 papers with code • 1 benchmark • 2 datasets
Most implemented papers
The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints
We show that a mild relaxation of the task and workspace constraints implicit in existing object grasping datasets can cause neural-network-based grasping algorithms to fail on even a simple block stacking task when executed under more realistic circumstances.
3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans
Our second contribution is to provide the first fully automatic Spatial PerceptIon eNgine (SPIN) to build a DSG (3D Dynamic Scene Graph) from visual-inertial data.
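A DSG organizes spatial perception as a layered graph whose nodes represent places, objects, and humans, connected by relational edges. The following Python sketch illustrates that idea under assumed layer names and fields; it is not SPIN's actual schema or API.

```python
# Minimal sketch of a layered dynamic scene graph: nodes grouped into layers
# (places, objects, agents) and linked by labeled edges. Layer names, fields,
# and the example scene are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    layer: str                   # e.g. "place", "object", "agent"
    position: tuple              # (x, y, z) in the world frame
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, dst_id, relation)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def connect(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

# Example: a mug on a table inside a kitchen "place" node.
g = SceneGraph()
g.add_node(Node("kitchen", "place", (0.0, 0.0, 0.0)))
g.add_node(Node("table_1", "object", (1.0, 0.5, 0.0)))
g.add_node(Node("mug_3", "object", (1.0, 0.5, 0.8)))
g.connect("mug_3", "table_1", "on")
g.connect("table_1", "kitchen", "in")
```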
Visual Robot Task Planning
In this work, we propose a neural network architecture and associated planning algorithm that (1) learns a representation of the world useful for generating prospective futures after applying high-level actions, (2) uses this generative model to simulate the outcomes of sequences of high-level actions in a variety of environments, and (3) uses the same representation to evaluate those actions and perform tree search for a sequence of high-level actions in a new environment.
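The core loop is planning by tree search over a learned forward model: encode the current observation, imagine the latent state after each candidate high-level action, score the imagined states against the goal, and keep expanding the most promising partial plans. Below is a minimal Python sketch of that loop; the encoder, dynamics model, scoring function, and action names are hypothetical stand-ins, not the paper's actual models.

```python
# Sketch: beam-style tree search over an assumed learned latent forward model.
import heapq
import numpy as np

ACTIONS = ["pick_red", "place_on_blue", "pick_green", "place_on_table"]  # assumed high-level actions

def encode(observation):
    """Hypothetical learned encoder: observation -> latent state."""
    return np.asarray(observation, dtype=np.float32)

def predict_next(latent, action):
    """Hypothetical learned dynamics; a toy deterministic perturbation stands in here."""
    rng = np.random.default_rng(abs(hash(action)) % (2**32))
    return latent + 0.1 * rng.standard_normal(latent.shape).astype(np.float32)

def score(latent, goal_latent):
    """Hypothetical evaluator: higher means the imagined state is closer to the goal."""
    return -float(np.linalg.norm(latent - goal_latent))

def plan(observation, goal_observation, horizon=4, beam=8):
    """Search over imagined action sequences and return the best one found."""
    start, goal = encode(observation), encode(goal_observation)
    frontier = [(-score(start, goal), [], start)]        # (cost, action sequence, latent)
    for _ in range(horizon):
        candidates = []
        for _, seq, latent in frontier:
            for a in ACTIONS:
                nxt = predict_next(latent, a)             # imagine the action's outcome
                candidates.append((-score(nxt, goal), seq + [a], nxt))
        frontier = heapq.nsmallest(beam, candidates, key=lambda c: c[0])  # keep best partial plans
    return min(frontier, key=lambda c: c[0])[1]

# Example usage with toy 8-dimensional "observations":
# best_sequence = plan(np.zeros(8), np.ones(8))
```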
PackIt: A Virtual Environment for Geometric Planning
The ability to jointly understand the geometry of objects and plan actions for manipulating them is crucial for intelligent agents.
CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation
This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation.