Search Results for author: Changhyun Choi

Found 8 papers, 1 paper with code

A Deep Learning Approach to Grasping the Invisible

1 code implementation • 11 Sep 2019 • Yang Yang, Hengyue Liang, Changhyun Choi

The target-oriented motion critic, which maps both visual observations and target information to the expected future rewards of pushing and grasping motion primitives, is learned via deep Q-learning.

Q-Learning
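
The critic described above is a deep Q-learning setup over discrete motion primitives. Below is a minimal PyTorch sketch of such a target-oriented critic, not the authors' code: the input shapes, the two-layer backbone, and the RGB-D-plus-mask encoding are illustrative assumptions.

```python
# Minimal sketch of a target-oriented motion critic: a CNN maps a visual
# observation plus a target mask to per-pixel Q-values (expected future
# rewards) for two motion primitives, push and grasp. Shapes are assumed.
import torch
import torch.nn as nn

class MotionCritic(nn.Module):
    def __init__(self, in_channels=5):  # assumed: RGB-D (4) + target mask (1)
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One Q-map per primitive: channel 0 = push, channel 1 = grasp.
        self.q_head = nn.Conv2d(64, 2, 1)

    def forward(self, obs, target_mask):
        x = torch.cat([obs, target_mask], dim=1)
        return self.q_head(self.backbone(x))  # (B, 2, H, W) Q-values

critic = MotionCritic()
obs = torch.randn(1, 4, 64, 64)   # RGB-D observation (assumed resolution)
mask = torch.zeros(1, 1, 64, 64)  # binary mask marking the target object
q = critic(obs, mask)
# Greedy policy: pick the best (primitive, pixel) pair under the Q-values.
best_action = q.flatten().argmax()
```
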

Learning to Generate 6-DoF Grasp Poses with Reachability Awareness

no code implementations • 14 Oct 2019 • Xibai Lou, Yang Yang, Changhyun Choi

Motivated by the stringent requirements of unstructured real-world environments, where a plethora of unknown objects reside at arbitrary locations on the surface, we propose a voxel-based deep 3D Convolutional Neural Network (3D CNN) that generates feasible 6-DoF grasp poses in an unrestricted workspace with reachability awareness.
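
A minimal sketch of how a voxel-based 3D CNN might score a candidate 6-DoF grasp for both grasp quality and reachability; the two-headed design, layer sizes, and 7-D pose encoding (position plus quaternion) are assumptions for illustration, not the paper's architecture.

```python
# Sketch: a 3D CNN encodes an occupancy grid of the scene; two linear heads
# score a candidate grasp pose for quality and for reachability.
import torch
import torch.nn as nn

class GraspReachNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.grasp_head = nn.Linear(32 + 7, 1)  # pose: xyz + quaternion (7-D)
        self.reach_head = nn.Linear(32 + 7, 1)

    def forward(self, voxels, pose):
        feat = torch.cat([self.encoder(voxels), pose], dim=1)
        return torch.sigmoid(self.grasp_head(feat)), torch.sigmoid(self.reach_head(feat))

net = GraspReachNet3D()
voxels = torch.zeros(1, 1, 32, 32, 32)  # voxelized scene (assumed resolution)
pose = torch.randn(1, 7)                # candidate 6-DoF grasp pose
grasp_score, reach_score = net(voxels, pose)
# A feasible grasp should score high on both quality and reachability.
```
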

Collision-Aware Target-Driven Object Grasping in Constrained Environments

no code implementations • 1 Apr 2021 • Xibai Lou, Yang Yang, Changhyun Choi

Grasping a novel target object in constrained environments (e.g., walls, bins, and shelves) requires intensive reasoning about grasp pose reachability to avoid collisions with the surrounding structures.

Object, Robotic Grasping
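
One simple way to realize this kind of collision reasoning is to test each candidate grasp against an occupancy grid of the surrounding structures. The sketch below uses a hypothetical axis-aligned bounding-box approximation of the gripper; it illustrates the filtering idea, not the paper's method.

```python
# Sketch: reject candidate grasp positions whose gripper footprint overlaps
# occupied voxels (walls, bins, shelf boards). All names and the AABB gripper
# approximation are illustrative assumptions.
import numpy as np

def in_collision(pose_xyz, occupancy, origin, resolution, half_extent=0.05):
    """True if an axis-aligned box around the gripper overlaps occupied voxels."""
    lo = np.floor((pose_xyz - half_extent - origin) / resolution).astype(int)
    hi = np.ceil((pose_xyz + half_extent - origin) / resolution).astype(int)
    lo = np.clip(lo, 0, np.array(occupancy.shape) - 1)
    hi = np.clip(hi, 0, np.array(occupancy.shape))
    return occupancy[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].any()

# Filter candidate grasps against a toy shelf scene (1 m cube, 1 cm voxels).
occ = np.zeros((100, 100, 100), dtype=bool)
occ[:, :, 60:] = True                     # a shelf board above the workspace
candidates = np.random.rand(50, 3)        # candidate grasp positions (meters)
feasible = [p for p in candidates
            if not in_collision(p, occ, origin=np.zeros(3), resolution=0.01)]
```
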

Attribute-Based Robotic Grasping with One-Grasp Adaptation

no code implementations • 6 Apr 2021 • Yang Yang, YuanHao Liu, Hengyue Liang, Xibai Lou, Changhyun Choi

In this work, we introduce an end-to-end learning method of attribute-based robotic grasping with one-grasp adaptation capability.

Attribute, Object +1
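
A hypothetical sketch of attribute-based target matching with a one-grasp adaptation step: object crops and an attribute query are embedded into a joint space, the best-matching object is grasped, and a single gradient update adapts the embeddings from that one grasp's outcome. The encoders, shapes, and loss are assumptions, not the authors' design.

```python
# Sketch: joint embedding of object crops and an attribute query; select the
# most similar object, then adapt from a single grasp outcome.
import torch
import torch.nn as nn
import torch.nn.functional as F

img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # crop encoder
attr_enc = nn.Linear(10, 64)                                        # attribute encoder

crops = torch.randn(4, 3, 32, 32)  # detected object candidates (assumed)
query = torch.randn(1, 10)         # attribute query vector (e.g., color/shape)
sim = F.cosine_similarity(img_enc(crops), attr_enc(query), dim=1)
target = sim.argmax().item()       # grasp the best-matching object

# One-grasp adaptation: after one attempt, pull the (crop, query) pair
# together on success (label = 1) or push apart on failure (label = 0).
label = torch.tensor([1.0])
opt = torch.optim.SGD(list(img_enc.parameters()) + list(attr_enc.parameters()), lr=1e-3)
loss = F.binary_cross_entropy_with_logits(sim[target:target + 1], label)
opt.zero_grad()
loss.backward()
opt.step()
```
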

KINet: Unsupervised Forward Models for Robotic Pushing Manipulation

no code implementations • 18 Feb 2022 • Alireza Rezazadeh, Changhyun Choi

Using visual observations, our model learns to associate objects with keypoint coordinates and discovers a graph representation of the system as a set of keypoint embeddings and their relations.

Object
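
A minimal sketch, under assumed shapes, of the keypoint-to-graph representation described above: K detected keypoint coordinates and their embeddings become graph nodes, with pairwise relative displacements as edge features.

```python
# Sketch: build a fully connected graph over detected keypoints. The keypoint
# count, embedding size, and random inputs are placeholders for a detector.
import torch

K, D = 8, 16
coords = torch.rand(K, 2)   # keypoint (x, y) coordinates from the detector
embeds = torch.randn(K, D)  # per-keypoint appearance embeddings (nodes)

# Edge features: pairwise relative displacement between keypoints.
rel = coords.unsqueeze(0) - coords.unsqueeze(1)  # (K, K, 2)
edges = [(i, j, rel[i, j]) for i in range(K) for j in range(K) if i != j]

# A forward model would pass messages over (embeds, edges) to predict
# next-step keypoint coordinates given an action such as a push.
```
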
