Motion Policy Networks

21 Oct 2022  ·  Adam Fishman, Adithyavairan Murali, Clemens Eppner, Bryan Peele, Byron Boots, Dieter Fox ·

Collision-free motion generation in unknown environments is a core building block for robot manipulation. Generating such motions is challenging due to multiple competing objectives: not only must the solutions be optimal, but the motion generator itself must be fast enough for real-time performance and reliable enough for practical deployment. A wide variety of methods have been proposed, ranging from local controllers to global planners, which are often combined to offset each other's shortcomings. We present an end-to-end neural model called Motion Policy Networks (M$\pi$Nets) that generates collision-free, smooth motion from just a single depth camera observation. M$\pi$Nets are trained on over 3 million motion planning problems in over 500,000 environments. Our experiments show that M$\pi$Nets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes. They are 46% more successful than prior neural planners and more robust than local control policies. Despite being trained only in simulation, M$\pi$Nets transfer well to the real robot with noisy partial point clouds. Code and data are publicly available at https://mpinets.github.io.
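To illustrate the closed-loop setup the abstract describes (a reactive policy queried repeatedly on a point-cloud observation, rather than a one-shot global plan), here is a minimal sketch. The `policy` stub, its signature, and the 7-DoF configuration are illustrative assumptions, not the actual M$\pi$Nets architecture or API; the real network consumes a segmented point cloud and is trained on expert trajectories.

```python
import numpy as np

def policy(point_cloud, joint_config):
    """Stand-in for the learned network: maps an observation and the current
    joint configuration to a joint-space displacement. Here it simply nudges
    the arm toward the zero configuration for illustration; the real policy
    is a trained neural network."""
    target = np.zeros_like(joint_config)  # hypothetical goal configuration
    return 0.1 * (target - joint_config)

def rollout(point_cloud, q0, horizon=50):
    """Closed-loop execution: re-query the policy at every control step,
    as a reactive neural policy would with a live depth stream."""
    q = np.array(q0, dtype=float)
    trajectory = [q.copy()]
    for _ in range(horizon):
        q = q + policy(point_cloud, q)
        trajectory.append(q.copy())
    return np.stack(trajectory)

# A single-view depth observation as an (N, 3) point cloud.
cloud = np.random.rand(4096, 3)
traj = rollout(cloud, q0=[0.5, -0.3, 0.8, 0.0, 0.2, -0.1, 0.4])
```

Because the policy is re-evaluated at every step from the latest observation, moving obstacles can be handled by updating `cloud` between steps, which is the reactivity advantage over global planners noted above.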


Datasets

Introduced in the Paper: Motion Policy Networks

