Search Results for author: Seongyeong Lee

Found 4 papers, 1 paper with code

Transformer-Based Unified Recognition of Two Hands Manipulating Objects

1 code implementation CVPR 2023 Hoseong Cho, Chanwoo Kim, Jihyeon Kim, Seongyeong Lee, Elkhan Ismayilzada, Seungryul Baek

In our framework, we take as input the whole image depicting two hands, an object, and their interactions, and jointly estimate three pieces of information from each frame: the poses of the two hands, the pose of the object, and the object type (a minimal sketch of such a joint-estimation model follows this entry).

Object
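The snippet above describes joint estimation of both hand poses, the object pose, and the object type from a single image. The following is a minimal sketch of that idea, not the authors' exact architecture: the backbone, head dimensions, and pose parameterization here are assumptions for illustration.

```python
# Minimal sketch (assumption, not the paper's exact model): encode the image
# into tokens, run a transformer encoder over them, and attach separate heads
# that jointly predict left/right hand poses, object pose, and object type.
import torch
import torch.nn as nn

class UnifiedHandObjectNet(nn.Module):
    def __init__(self, num_object_classes=10, num_joints=21, d_model=256):
        super().__init__()
        # Toy convolutional backbone standing in for any image encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=4, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # One head per output: two hand poses, one object pose, object type.
        self.left_hand_head = nn.Linear(d_model, num_joints * 3)
        self.right_hand_head = nn.Linear(d_model, num_joints * 3)
        self.object_pose_head = nn.Linear(d_model, 9)   # e.g. 6D rotation + translation
        self.object_cls_head = nn.Linear(d_model, num_object_classes)

    def forward(self, image):
        feat = self.backbone(image)               # (B, C, H, W) feature map
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C) tokens
        tokens = self.encoder(tokens)
        pooled = tokens.mean(dim=1)               # global summary of all tokens
        return {
            "left_hand_pose": self.left_hand_head(pooled).view(-1, 21, 3),
            "right_hand_pose": self.right_hand_head(pooled).view(-1, 21, 3),
            "object_pose": self.object_pose_head(pooled),
            "object_type": self.object_cls_head(pooled),
        }

outputs = UnifiedHandObjectNet()(torch.randn(1, 3, 256, 256))
```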

HOReeNet: 3D-aware Hand-Object Grasping Reenactment

no code implementations 11 Nov 2022 Changhwa Lee, Junuk Cha, Hansol Lee, Seongyeong Lee, Donguk Kim, Seungryul Baek

At the same time, obtaining high-quality 2D images from 3D space requires well-designed 3D-to-2D projection and image refinement (a standard projection sketch follows this entry).

3D Reconstruction, Object
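The snippet mentions projecting from 3D space to 2D images. Below is a minimal sketch of a standard pinhole 3D-to-2D projection; this is an assumption for illustration, and the paper's "well-designed" projection and refinement modules are more involved.

```python
# Minimal pinhole-camera sketch: camera-space 3D points are mapped to pixel
# coordinates with the intrinsic matrix K (an assumption, not the paper's
# exact projection module).
import numpy as np

def project_points(points_3d, K):
    """points_3d: (N, 3) camera-space points; K: (3, 3) intrinsics."""
    uvw = points_3d @ K.T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth to get pixel coordinates

# Hypothetical intrinsics and hand joints, only to show the call.
K = np.array([[600.0, 0.0, 128.0],
              [0.0, 600.0, 128.0],
              [0.0, 0.0, 1.0]])
joints_3d = np.random.rand(21, 3) + np.array([0.0, 0.0, 0.5])  # keep depth > 0
joints_2d = project_points(joints_3d, K)
```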

Image-free Domain Generalization via CLIP for 3D Hand Pose Estimation

no code implementations 30 Oct 2022 Seongyeong Lee, Hansoo Park, Dong Uk Kim, Jihyeon Kim, Muhammadjon Boboev, Seungryul Baek

The manipulated image features are then used to train the hand pose estimation network via a contrastive learning framework (see the sketch after this entry).

3D Hand Pose Estimation, Contrastive Learning, +1
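The snippet refers to training with a contrastive learning framework on manipulated image features. The following is a minimal sketch of an InfoNCE-style contrastive loss; the exact loss, temperature, and feature dimensions used in the paper are assumptions here.

```python
# Minimal InfoNCE-style sketch (assumption about the exact formulation): pull
# each original image feature toward its manipulated counterpart and push it
# away from the other samples in the batch.
import torch
import torch.nn.functional as F

def info_nce(orig_feat, manip_feat, temperature=0.07):
    """orig_feat, manip_feat: (B, D) feature batches; returns a scalar loss."""
    orig = F.normalize(orig_feat, dim=1)
    manip = F.normalize(manip_feat, dim=1)
    logits = orig @ manip.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(orig.size(0), device=orig.device)
    return F.cross_entropy(logits, targets)           # positives on the diagonal

loss = info_nce(torch.randn(8, 512), torch.randn(8, 512))
```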

Transformer-based Global 3D Hand Pose Estimation in Two Hands Manipulating Objects Scenarios

no code implementations 20 Oct 2022 Hoseong Cho, Donguk Kim, Chanwoo Kim, Seongyeong Lee, Seungryul Baek

In this challenge, we aim to estimate global 3D hand poses from an input image in which two hands and an object are interacting, viewed from an egocentric viewpoint (one common recovery scheme is sketched after this entry).

3D Hand Pose Estimation
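The snippet states the challenge goal of recovering global (rather than root-relative) 3D hand poses. The sketch below shows one common way this is done; it is an assumption for illustration and not a description of this entry's actual method.

```python
# Minimal sketch (assumption, not the challenge entry's method): combine a
# root-relative pose prediction with an estimated root depth and 2D root
# location, back-projected through the camera intrinsics, to get global joints.
import numpy as np

def to_global_joints(rel_joints, root_uv, root_depth, K):
    """rel_joints: (21, 3) root-relative; root_uv: (2,) pixels; root_depth: metres."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    root_xyz = np.array([(root_uv[0] - cx) * root_depth / fx,
                         (root_uv[1] - cy) * root_depth / fy,
                         root_depth])
    return rel_joints + root_xyz          # shift every joint by the global root

# Hypothetical intrinsics and predictions, only to show the call.
K = np.array([[600.0, 0.0, 128.0], [0.0, 600.0, 128.0], [0.0, 0.0, 1.0]])
global_joints = to_global_joints(np.random.rand(21, 3) * 0.1,
                                 np.array([140.0, 120.0]), 0.45, K)
```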
