Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration

10 Jul 2017 · Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau Bölöni, Sergey Levine

We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network using raw images as input and generating robot arm trajectories, with the parameters shared across the tasks.
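To make the shared-controller idea concrete, here is a minimal sketch of such an architecture: a single recurrent network that, at each timestep, consumes image features together with a one-hot task selector and emits a joint command, with one set of weights serving all tasks. All names, dimensions, and the NumPy implementation are illustrative assumptions, not the paper's actual model (which operates on raw images and was trained on real demonstrations).

```python
import numpy as np

class SharedRNNController:
    """Toy recurrent controller with parameters shared across tasks.

    Illustrative only: dimensions and weight initialization are assumptions,
    not taken from the paper.
    """

    def __init__(self, feat_dim, n_tasks, hidden_dim, n_joints, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = feat_dim + n_tasks  # image features + one-hot task selector
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, in_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.W_out = rng.normal(0.0, 0.1, (n_joints, hidden_dim))
        self.h = np.zeros(hidden_dim)  # recurrent state

    def step(self, image_feat, task_onehot):
        # One recurrent update; the same weights are used regardless of task.
        x = np.concatenate([image_feat, task_onehot])
        self.h = np.tanh(self.W_in @ x + self.W_h @ self.h)
        return self.W_out @ self.h  # joint command for this timestep

# Roll out a short trajectory for one task; switching the one-hot vector
# would reuse the identical weights for a different task.
ctrl = SharedRNNController(feat_dim=64, n_tasks=3, hidden_dim=32, n_joints=5)
task = np.eye(3)[1]
rng = np.random.default_rng(42)
traj = [ctrl.step(rng.normal(size=64), task) for _ in range(10)]
```

Conditioning on a task selector rather than training one network per task is what lets the controller share representations across picking, placing, and non-prehensile behaviors.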

