End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches.
The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.
Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data.
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping.
The RobotriX is an extremely photorealistic indoor dataset designed to enable the application of deep learning techniques to a wide variety of robotic vision problems.
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.
The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation.
In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping.