End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches.
Because product images are readily available for a wide range of objects (e.g., from the web), a system that recognizes objects by matching observations against such images works out-of-the-box for novel objects, without requiring any additional training data.
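As a rough illustration of how such product-image matching could work, the sketch below embeds both reference product images and observed object crops with a generic pretrained CNN and labels a crop by its nearest reference in embedding space. The backbone choice, the `embed` and `identify` helpers, and the example file names are illustrative assumptions, not the design of any particular published system.

```python
# Minimal sketch: novel-object recognition by matching observed crops
# against reference product images in a learned embedding space.
# Assumptions: a generic ImageNet-pretrained backbone and cosine
# similarity; a real system would train a cross-domain embedding.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # strip classifier; keep 512-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image: Image.Image) -> torch.Tensor:
    """Map an RGB image to a unit-norm feature vector."""
    x = preprocess(image.convert("RGB")).unsqueeze(0)
    feat = backbone(x).squeeze(0)
    return feat / feat.norm()

# Reference set: one product image per object (file names hypothetical).
product_images = {"mug": Image.open("mug_product.jpg"),
                  "stapler": Image.open("stapler_product.jpg")}
reference = {name: embed(img) for name, img in product_images.items()}

def identify(crop: Image.Image) -> str:
    """Label an observed object crop by its nearest reference embedding."""
    query = embed(crop)
    return max(reference, key=lambda name: float(query @ reference[name]))
```

Because the reference set is just a dictionary of embeddings, adding a new object amounts to embedding one more product image, which is what makes the approach attractive for novel objects.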
This paper presents a real-time, object-independent grasp synthesis method that can be used for closed-loop grasping.
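One way to make such closed-loop grasp synthesis concrete is to predict grasp parameters densely, per pixel, from each depth frame and re-plan on every control cycle rather than executing a single precomputed grasp. The sketch below follows that pattern; the tiny fully convolutional network and the `get_depth_image` / `servo_towards` hooks are hypothetical placeholders, not the architecture of the cited method.

```python
# Minimal sketch of closed-loop grasp synthesis: a small fully
# convolutional net maps a depth image to per-pixel grasp quality,
# angle, and width maps; each cycle the best pixel becomes the grasp
# target. Network size and the robot/camera hooks are illustrative.
import numpy as np
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        # Three per-pixel output maps: quality, angle, gripper width.
        self.quality = nn.Conv2d(16, 1, 1)
        self.angle = nn.Conv2d(16, 1, 1)
        self.width = nn.Conv2d(16, 1, 1)

    def forward(self, depth):
        f = self.features(depth)
        return torch.sigmoid(self.quality(f)), self.angle(f), self.width(f)

@torch.no_grad()
def best_grasp(net, depth_image):
    """Pick the pixel with the highest predicted grasp quality."""
    depth = torch.from_numpy(depth_image).float()[None, None]
    q, ang, w = net(depth)
    v, u = np.unravel_index(int(torch.argmax(q)), q.shape[-2:])
    return (u, v), float(ang[0, 0, v, u]), float(w[0, 0, v, u])

# Closed-loop use (hypothetical robot/camera hooks): re-plan from the
# latest depth frame on every control cycle.
# net = GraspNet().eval()
# while not grasp_completed():
#     (u, v), angle, width = best_grasp(net, get_depth_image())
#     servo_towards(u, v, angle, width)
```

Re-planning every cycle is what makes the loop closed: the grasp target is continuously corrected as the scene (or the estimate of it) changes during the approach.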
The RobotriX is an extremely photorealistic indoor dataset designed to enable the application of deep learning techniques to a wide variety of robotic vision problems.
The Amazon Robotics Challenge enlisted sixteen teams, each designing a pick-and-place robot for autonomous warehousing, to drive development in robotic vision and manipulation.
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.
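One simple formulation of viewpoint selection is to score candidate camera poses by how much of the currently unobserved scene they would reveal, then move to the highest-scoring pose. The sketch below illustrates this with a hypothetical cone-based visibility heuristic over a set of unknown voxels; a real system would substitute its own information-gain or grasp-confidence measure.

```python
# Minimal sketch of camera viewpoint selection for grasp detection in
# clutter: score each candidate viewpoint by how many currently
# unobserved voxels fall inside its viewing cone, then pick the best.
# The voxel grid, cone-based visibility test, and candidate poses are
# illustrative assumptions, not a specific published algorithm.
import numpy as np

def score_view(camera_pos, look_dir, unknown_voxels, fov_cos=0.866):
    """Count unknown voxels inside a ~60-degree viewing cone."""
    look_dir = look_dir / np.linalg.norm(look_dir)
    rays = unknown_voxels - camera_pos           # vectors to each voxel
    dists = np.maximum(np.linalg.norm(rays, axis=1), 1e-9)
    cos_angles = (rays @ look_dir) / dists
    return int(np.sum(cos_angles > fov_cos))     # voxels within the cone

def select_viewpoint(candidates, unknown_voxels):
    """Return the (position, look direction) pair with the best score."""
    return max(candidates,
               key=lambda c: score_view(c[0], c[1], unknown_voxels))

# Example: unknown voxels clustered around the scene center, and a few
# candidate viewpoints on an arc looking back at the origin.
rng = np.random.default_rng(0)
unknown = rng.normal(scale=0.1, size=(500, 3))
candidates = []
for theta in np.linspace(0.0, np.pi, 6):
    pos = np.array([0.5 * np.cos(theta), 0.5 * np.sin(theta), 0.4])
    candidates.append((pos, -pos))               # look toward the origin
best_pos, best_dir = select_viewpoint(candidates, unknown)
print("best viewpoint:", best_pos)
```

In clutter, where occlusions hide parts of the scene from any single view, this kind of scoring lets the system actively choose views that expose candidate grasp regions before committing to a grasp.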