Search Results for author: Ruinian Xu

Found 7 papers, 5 papers with code

Keypoint-GraspNet: Keypoint-based 6-DoF Grasp Generation from the Monocular RGB-D input

1 code implementation • 19 Sep 2022 • Yiye Chen, Yunzhi Lin, Ruinian Xu, Patricio Vela

Great success has been achieved in 6-DoF grasp learning from point cloud input, yet the computational cost due to the orderlessness of point sets remains a concern.

Grasp Generation

GKNet: grasp keypoint network for grasp candidates detection

no code implementations • 16 Jun 2021 • Ruinian Xu, Fu-Jen Chu, Patricio A. Vela

Decreasing the detection difficulty by grouping keypoints into pairs boosts performance.

Keypoint Detection

A Joint Network for Grasp Detection Conditioned on Natural Language Commands

no code implementations • 1 Apr 2021 • Yiye Chen, Ruinian Xu, Yunzhi Lin, Patricio A. Vela

We consider the task of grasping a target object based on a natural language command query.


Recognizing Object Affordances to Support Scene Reasoning for Manipulation Tasks

1 code implementation • 12 Sep 2019 • Fu-Jen Chu, Ruinian Xu, Chao Tang, Patricio A. Vela

Unfortunately, the top-performing affordance recognition methods use object category priors to boost the accuracy of affordance detection and segmentation.

Affordance Detection • Affordance Recognition • +3

Deep Grasp: Detection and Localization of Grasps with Deep Neural Networks

4 code implementations • 1 Feb 2018 • Fu-Jen Chu, Ruinian Xu, Patricio A. Vela

By defining the learning problem as classification with null hypothesis competition instead of regression, the deep neural network with RGB-D image input predicts multiple grasp candidates for a single object or multiple objects in a single shot.
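The classification-with-null-competition idea in the abstract above can be sketched as follows. This is a minimal toy decode step, not the paper's implementation: the bin count, class layout, and threshold-free argmax decoding are all illustrative assumptions.

```python
# Hypothetical sketch: grasp orientation framed as classification over
# discrete angle bins plus one competing "null" (no-grasp) class, rather
# than regressing a continuous angle. All constants are assumptions.
NUM_ANGLE_BINS = 18          # orientation discretized into 10-degree bins
NULL_CLASS = NUM_ANGLE_BINS  # index of the competing null hypothesis class

def decode_grasp(logits):
    """Pick the winning class among angle bins and the null class.

    Returns the winning bin's center angle in degrees, or None when the
    null hypothesis wins (i.e., no valid grasp at this candidate).
    """
    winner = max(range(len(logits)), key=lambda i: logits[i])
    if winner == NULL_CLASS:
        return None
    bin_width = 180.0 / NUM_ANGLE_BINS
    return winner * bin_width + bin_width / 2.0

# Toy logits for two candidates: one confident grasp, one rejection.
grasp_logits = [-1.0] * (NUM_ANGLE_BINS + 1)
grasp_logits[4] = 3.0                     # bin 4 wins -> a grasp angle
reject_logits = [-1.0] * (NUM_ANGLE_BINS + 1)
reject_logits[NULL_CLASS] = 3.0           # null class wins -> no grasp

print(decode_grasp(grasp_logits))   # center angle of bin 4
print(decode_grasp(reject_logits))  # None
```

Letting a null class compete directly with the angle bins lets the network reject a candidate outright instead of being forced to emit some angle, which is the motivation the abstract gives for preferring classification over regression.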

