Search Results for author: Changhyun Choi

Found 6 papers, 1 paper with code

KINet: Keypoint Interaction Networks for Unsupervised Forward Modeling

no code implementations • 18 Feb 2022 • Alireza Rezazadeh, Changhyun Choi

By learning to perform physical reasoning in the keypoint space, our model automatically generalizes to scenarios with a different number of objects and novel object geometries.
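For intuition, here is a minimal sketch (an assumption, not the authors' implementation) of forward modeling in keypoint space: shared MLPs exchange pairwise messages between keypoints and predict each keypoint's displacement, so the same weights apply to any number of keypoints, which is what lets this style of model generalize across object counts. The module names and dimensions below are illustrative.

```python
# Hypothetical sketch of keypoint-space interaction, not the KINet code.
import torch
import torch.nn as nn

class KeypointInteraction(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        # Relation MLP: consumes a (sender, receiver) keypoint pair, emits a message.
        self.relation = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Update MLP: consumes a keypoint plus its aggregated messages,
        # emits that keypoint's predicted displacement for the next frame.
        self.update = nn.Sequential(
            nn.Linear(dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, kp):                          # kp: (B, K, dim) keypoints
        B, K, D = kp.shape
        src = kp.unsqueeze(2).expand(B, K, K, D)    # sender keypoints
        dst = kp.unsqueeze(1).expand(B, K, K, D)    # receiver keypoints
        msg = self.relation(torch.cat([src, dst], dim=-1)).sum(dim=1)  # aggregate senders
        return kp + self.update(torch.cat([kp, msg], dim=-1))  # next-frame keypoints
```

Because every MLP is shared across keypoints, the forward pass works unchanged for any K, mirroring the generalization claim in the abstract.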

Attribute-Based Robotic Grasping with One-Grasp Adaptation

no code implementations • 6 Apr 2021 • Yang Yang, YuanHao Liu, Hengyue Liang, Xibai Lou, Changhyun Choi

In this work, we introduce an end-to-end learning method for attribute-based robotic grasping with one-grasp adaptation capability.

Robotic Grasping

Collision-Aware Target-Driven Object Grasping in Constrained Environments

no code implementations • 1 Apr 2021 • Xibai Lou, Yang Yang, Changhyun Choi

Grasping a novel target object in constrained environments (e.g., walls, bins, and shelves) requires intensive reasoning about grasp pose reachability to avoid collisions with the surrounding structures.

Robotic Grasping

Learning to Generate 6-DoF Grasp Poses with Reachability Awareness

no code implementations • 14 Oct 2019 • Xibai Lou, Yang Yang, Changhyun Choi

Motivated by the stringent requirements of the unstructured real world, where a plethora of unknown objects reside at arbitrary locations on the surface, we propose a voxel-based deep 3D Convolutional Neural Network (3D CNN) that generates feasible 6-DoF grasp poses in an unrestricted workspace with reachability awareness.
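As a rough illustration (an assumption-laden sketch, not the paper's architecture), a small 3D CNN can encode an occupancy voxel grid into a scene feature and score candidate 6-DoF poses against it. The class name `GraspScorer3D`, the grid resolution, and the (x, y, z, roll, pitch, yaw) pose parameterization are all hypothetical.

```python
# Hypothetical voxel-based grasp scoring sketch, not the paper's network.
import torch
import torch.nn as nn

class GraspScorer3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                  # voxel grid -> scene feature
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())     # (B, 32)
        self.head = nn.Sequential(                     # scene feature + pose -> score
            nn.Linear(32 + 6, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, voxels, poses):
        # voxels: (B, 1, 32, 32, 32) occupancy grid
        # poses:  (B, N, 6) candidate grasps as (x, y, z, roll, pitch, yaw)
        feat = self.encoder(voxels)                              # (B, 32)
        feat = feat.unsqueeze(1).expand(-1, poses.shape[1], -1)  # (B, N, 32)
        return self.head(torch.cat([feat, poses], dim=-1)).squeeze(-1)  # (B, N)
```

In this setup, reachability awareness would come from the training labels: infeasible or unreachable candidate poses receive low scores, so the scorer learns to rank only reachable grasps highly.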

A Deep Learning Approach to Grasping the Invisible

1 code implementation • 11 Sep 2019 • Yang Yang, Hengyue Liang, Changhyun Choi

The target-oriented motion critic, which maps both visual observations and target information to the expected future rewards of pushing and grasping motion primitives, is learned via deep Q-learning.

Q-Learning
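The deep Q-learning update behind such a motion critic can be summarized with a short sketch. Here `critic(obs, target)` is a hypothetical function returning one Q-value per motion primitive (e.g., push, grasp), and `target_critic` is a frozen copy used to bootstrap the TD target; none of these names come from the paper.

```python
# Minimal deep Q-learning step for a target-oriented motion critic (sketch).
import torch
import torch.nn.functional as F

def q_learning_step(critic, target_critic, optimizer, batch, gamma=0.9):
    obs, tgt, action, reward, next_obs, next_tgt, done = batch
    q = critic(obs, tgt)                                   # (B, num_primitives)
    q_taken = q.gather(1, action.unsqueeze(1)).squeeze(1)  # Q of executed primitive
    with torch.no_grad():                                  # bootstrap from frozen critic
        q_next = target_critic(next_obs, next_tgt).max(dim=1).values
        td_target = reward + gamma * (1.0 - done) * q_next
    loss = F.smooth_l1_loss(q_taken, td_target)            # TD error on taken action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```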
