Robotic Grasping

51 papers with code • 3 benchmarks • 12 datasets

This task uses deep learning to determine how best to grasp objects with robotic arms in different scenarios. It is challenging because it may involve dynamic environments and objects the network has never seen.
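Many of the grasp-detection papers below predict grasps as oriented rectangles in the image plane. A minimal sketch of that common 5-D parameterization (center, angle, jaw opening, jaw size), assuming the oriented-rectangle convention used by benchmarks such as the Cornell grasp dataset:

```python
import numpy as np

def grasp_rectangle(x, y, theta, width, height):
    """Return the 4 corner points of a 5-D grasp rectangle.

    (x, y) is the grasp center in pixels, theta the gripper rotation,
    width the jaw opening, and height the jaw size.
    """
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = width / 2.0, height / 2.0
    # Corners in the gripper's local frame, then rotated and translated.
    local = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + np.array([x, y])

corners = grasp_rectangle(100.0, 80.0, np.pi / 4, 40.0, 20.0)
print(corners.shape)  # (4, 2)
```

Datasets annotated this way let a network be trained either to regress the five parameters directly or to score candidate rectangles.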

Most implemented papers

Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network

skumra/robotic-grasping 11 Sep 2019

In this paper, we present a modular robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from an n-channel image of the scene.
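Generative grasping networks of this kind predict pixel-wise maps (grasp quality, angle, gripper width) and read the best grasp off the quality map. A hedged sketch of that decoding step, assuming the angle is encoded as (cos 2θ, sin 2θ) so that the two antipodal orientations map to the same value:

```python
import numpy as np

def decode_grasp(quality, cos2t, sin2t, width):
    """Pick the best grasp pose from pixel-wise network outputs.

    All four arguments are H x W maps; in practice they would come from
    the network's output heads. Returns (x, y, angle, opening).
    """
    idx = np.unravel_index(np.argmax(quality), quality.shape)
    # Recover the grasp angle from its double-angle encoding.
    theta = 0.5 * np.arctan2(sin2t[idx], cos2t[idx])
    return idx[1], idx[0], theta, width[idx]
```

The argmax pixel gives the grasp center; running this per frame is what makes such pipelines fast enough for closed-loop grasping.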

Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching

andyzeng/arc-robot-vision 3 Oct 2017

Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data.

The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints

jhu-lcsr/costar_plan 27 Oct 2018

We show that a mild relaxation of the task and workspace constraints implicit in existing object grasping datasets can cause neural network based grasping algorithms to fail on even a simple block stacking task when executed under more realistic circumstances.

Real-Time Grasp Detection Using Convolutional Neural Networks

DucTranVan/grasp-detection-pytorch 9 Dec 2014

We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks.

Real-world multiobject, multigrasp detection

ivalab/grasp_multiObject IEEE ROBOTICS AND AUTOMATION LETTERS 2018

A deep learning architecture is proposed to predict graspable locations for robotic manipulation.

PyRobot: An Open-source Robotics Framework for Research and Benchmarking

facebookresearch/pyrobot 19 Jun 2019

This paper introduces PyRobot, an open-source robotics framework for research and benchmarking.

Learning Object Placements For Relational Instructions by Hallucinating Scene Representations

mees/AIS-Alexa-Robot 23 Jan 2020

One particular requirement for such robots is that they are able to understand spatial relations and can place objects in accordance with the spatial relations expressed by their user.

Grasping Field: Learning Implicit Representations for Human Grasps

korrawe/grasping_field_demo 10 Aug 2020

Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud.
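An implicit grasp representation of this kind assigns each 3D query point distances to the hand and object surfaces; points where both distances vanish are hand-object contacts. A minimal sketch of that readout, with the distance fields supplied as plain arrays (in the paper they come from a learned network):

```python
import numpy as np

def contact_points(points, dist_hand, dist_obj, eps=0.005):
    """Select likely hand-object contact points from an implicit field.

    `points` is an N x 3 array of query points; `dist_hand` and
    `dist_obj` are length-N distances to the hand and object surfaces.
    A point is a contact candidate when it lies (within eps) on both.
    """
    mask = (np.abs(dist_hand) < eps) & (np.abs(dist_obj) < eps)
    return points[mask]
```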

Composing Pick-and-Place Tasks By Grounding Language

mees/AIS-Alexa-Robot 16 Feb 2021

Controlling robots to perform tasks via natural language is one of the most challenging topics in human-robot interaction.

The Role of Tactile Sensing in Learning and Deploying Grasp Refinement Algorithms

axkoenig/grasp_refinement 23 Sep 2021

Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands.
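Tactile signals typically enter such RL setups through the reward. A minimal illustrative sketch (the weighting and signal shapes are hypothetical, not taken from the paper) of a shaped reward that combines task success with a per-sensor contact bonus:

```python
import numpy as np

def grasp_refinement_reward(lifted, contacts, w_contact=0.1):
    """Shaped grasp-refinement reward: success term plus a tactile term.

    `lifted` is True if the object was lifted; `contacts` is a binary
    array of per-sensor touch readings. The contact bonus rewards
    refinement steps that increase finger-object contact.
    """
    return float(lifted) + w_contact * float(np.sum(contacts))
```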