2 code implementations • 7 Apr 2024 • Hyeongjin Nam, Daniel Sungho Jung, Gyeongsik Moon, Kyoung Mu Lee
As a result, our CONTHO achieves state-of-the-art performance in both human-object contact estimation and joint reconstruction of 3D human and object.
Ranked #1 on Contact Detection on BEHAVE
no code implementations • 10 Jan 2024 • Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires, He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail, Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollhöfer, Yaser Sheikh, Ziwei Liu, Shunsuke Saito
To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities.
1 code implementation • 5 Sep 2023 • JoonKyu Park, Daniel Sungho Jung, Gyeongsik Moon, Kyoung Mu Lee
Our two novel tokens are derived from a combination of the two separated hand features; hence, they are much more robust to the distant token problem.
Ranked #1 on 3D Interacting Hand Pose Estimation on InterHand2.6M
1 code implementation • 10 Apr 2023 • Gyeongsik Moon, Hongsuk Choi, Sanghyuk Chun, Jiyoung Lee, Sangdoo Yun
Recovering 3D human mesh in the wild is greatly challenging as in-the-wild (ITW) datasets provide only 2D pose ground truths (GTs).
Ranked #6 on 3D Multi-Person Pose Estimation on MuPoTS-3D
1 code implementation • CVPR 2023 • Yeonguk Oh, JoonKyu Park, Jaeha Kim, Gyeongsik Moon, Kyoung Mu Lee
In addition to the new dataset, we propose BlurHandNet, a baseline network for accurate 3D hand mesh recovery from a blurry hand image.
1 code implementation • CVPR 2023 • Gyeongsik Moon
Hence, the interacting hands of MoCap datasets are brought into the 2D scale space of the single hands of ITW datasets.
no code implementations • 9 Mar 2023 • Hongsuk Choi, Hyeongjin Nam, Taeryung Lee, Gyeongsik Moon, Kyoung Mu Lee
Recently, a few self-supervised representation learning (SSL) methods have outperformed the ImageNet classification pre-training for vision tasks such as object detection.
1 code implementation • 12 Dec 2022 • Taeryung Lee, Gyeongsik Moon, Kyoung Mu Lee
Action-conditioned methods generate a motion sequence from a single action.
no code implementations • 2 Oct 2022 • Hongsuk Choi, Gyeongsik Moon, Matthieu Armando, Vincent Leroy, Kyoung Mu Lee, Gregory Rogez
Existing neural human rendering methods struggle with a single image input due to the lack of information in invisible areas and the depth ambiguity of pixels in visible areas.
1 code implementation • 20 Jul 2022 • Gyeongsik Moon, Hyeongjin Nam, Takaaki Shiratori, Kyoung Mu Lee
Although much progress has been made in 3D clothed human reconstruction, most of the existing methods fail to produce robust results from in-the-wild images, which contain diverse human poses and appearances.
no code implementations • CVPR 2022 • JoonKyu Park, Yeonguk Oh, Gyeongsik Moon, Hongsuk Choi, Kyoung Mu Lee
However, we argue that occluded regions have strong correlations with hands so that they can provide highly beneficial information for complete 3D hand mesh estimation.
Ranked #5 on 3D Hand Pose Estimation on DexYCB
1 code implementation • CVPR 2022 • Hongsuk Choi, Gyeongsik Moon, JoonKyu Park, Kyoung Mu Lee
Second, we propose a joint-based regressor that distinguishes a target person's feature from others.
Ranked #10 on 3D Multi-Person Pose Estimation on MuPoTS-3D
5 code implementations • 23 Nov 2020 • Gyeongsik Moon, Hongsuk Choi, Kyoung Mu Lee
Assuming no 3D pseudo-GTs are available, NeuralAnnot is weakly supervised with GT 2D/3D joint coordinates of training sets.
1 code implementation • 23 Nov 2020 • Gyeongsik Moon, Hongsuk Choi, Kyoung Mu Lee
Using Pose2Pose, Hand4Whole utilizes hand MCP joint features to predict 3D wrist rotations, as MCP joints contribute substantially to wrist rotation in the human kinematic chain.
1 code implementation • CVPR 2021 • Hongsuk Choi, Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee
Our TCMR significantly outperforms previous video-based methods in temporal consistency with better per-frame 3D pose and shape accuracy.
Ranked #58 on 3D Human Pose Estimation on 3DPW
2 code implementations • ECCV 2020 • Gyeongsik Moon, Shoou-I Yu, He Wen, Takaaki Shiratori, Kyoung Mu Lee
Therefore, we first propose (1) a large-scale dataset, InterHand2.6M, and (2) a baseline network, InterNet, for 3D interacting hand pose estimation from a single RGB image.
Ranked #8 on 3D Interacting Hand Pose Estimation on InterHand2.6M
2 code implementations • ECCV 2020 • Hongsuk Choi, Gyeongsik Moon, Kyoung Mu Lee
Most of the recent deep learning-based 3D human pose and mesh estimation methods regress the pose and shape parameters of human mesh models, such as SMPL and MANO, from an input image.
Ranked #6 on 3D Hand Pose Estimation on FreiHAND
1 code implementation • ECCV 2020 • Gyeongsik Moon, Takaaki Shiratori, Kyoung Mu Lee
We design our system to be trained in an end-to-end and weakly supervised manner; therefore, it does not require ground-truth meshes.
1 code implementation • ECCV 2020 • Gyeongsik Moon, Kyoung Mu Lee
Most of the previous image-based 3D human pose and mesh estimation methods estimate parameters of the human mesh model from an input image.
Ranked #5 on 3D Hand Pose Estimation on FreiHAND
2 code implementations • 13 Jul 2020 • Gyeongsik Moon, Heeseung Kwon, Kyoung Mu Lee, Minsu Cho
Most current action recognition methods heavily rely on appearance information by taking an RGB sequence of entire image regions as input.
1 code implementation • 26 Oct 2019 • Ju Yong Chang, Gyeongsik Moon, Kyoung Mu Lee
This study presents a new network (i.e., PoseLifter) that can lift a 2D human pose to an absolute 3D pose in the camera coordinate system.
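The lifting idea can be illustrated with a toy regressor. This is a minimal sketch, not PoseLifter's actual architecture: a single-hidden-layer MLP with random placeholder weights (learned in practice) that maps J 2D joint coordinates to 3D camera-space coordinates; the joint count of 17 is an assumption for illustration.

```python
import numpy as np

J = 17  # hypothetical joint count, chosen for illustration

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2 * J, 128)) * 0.01  # placeholder weights
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 3 * J)) * 0.01
b2 = np.zeros(3 * J)

def lift_pose(pose_2d: np.ndarray) -> np.ndarray:
    """Map a (J, 2) 2D pose to a (J, 3) 3D pose via a tiny MLP."""
    h = np.maximum(pose_2d.reshape(-1) @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2).reshape(J, 3)

pose_2d = rng.uniform(0, 256, size=(J, 2))  # pixel coordinates
pose_3d = lift_pose(pose_2d)
print(pose_3d.shape)  # (17, 3)
```

A trained network of this shape outputs metric 3D coordinates directly in the camera frame, which is what distinguishes absolute lifting from root-relative prediction.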
Ranked #49 on 3D Human Pose Estimation on MPI-INF-3DHP (PCK metric)
4 code implementations • ICCV 2019 • Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee
Although significant improvement has been achieved recently in 3D human pose estimation, most of the previous methods only treat a single-person case.
Ranked #1 on Monocular 3D Human Pose Estimation on Human3.6M (Use Video Sequence metric)
no code implementations • 10 May 2019 • Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee
Multi-person pose estimation from a 2D image is challenging because it requires not only keypoint localization but also human detection.
1 code implementation • CVPR 2019 • Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee
In this paper, we propose a human pose refinement network that estimates a refined pose from a tuple of an input image and input pose.
Ranked #2 on Multi-Person Pose Estimation on MS COCO (Validation AP metric)
1 code implementation • CVPR 2018 • Shanxin Yuan, Guillermo Garcia-Hernando, Bjorn Stenger, Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee, Pavlo Molchanov, Jan Kautz, Sina Honari, Liuhao Ge, Junsong Yuan, Xinghao Chen, Guijin Wang, Fan Yang, Kai Akiyama, Yang Wu, Qingfu Wan, Meysam Madadi, Sergio Escalera, Shile Li, Dongheui Lee, Iason Oikonomidis, Antonis Argyros, Tae-Kyun Kim
Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
Ranked #5 on Hand Pose Estimation on HANDS 2017
5 code implementations • CVPR 2018 • Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee
To overcome these weaknesses, we first cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint.
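The output stage of per-voxel likelihood estimation can be sketched as follows. This is an assumption for illustration, not V2V-PoseNet's code: given a likelihood volume over a voxel grid for one keypoint, a soft-argmax over the grid recovers the keypoint's 3D voxel coordinates.

```python
import numpy as np

def soft_argmax_3d(likelihood: np.ndarray) -> np.ndarray:
    """likelihood: (D, H, W) non-negative scores; returns (3,) voxel coords (x, y, z)."""
    p = likelihood / likelihood.sum()          # normalize to a distribution
    zs, ys, xs = np.indices(likelihood.shape)  # coordinate grids per axis
    return np.array([(p * a).sum() for a in (xs, ys, zs)])  # expected position

vol = np.zeros((32, 32, 32))
vol[10, 20, 5] = 1.0  # a single confident voxel at z=10, y=20, x=5
print(soft_argmax_3d(vol))  # → [ 5. 20. 10.]
```

Because the prediction lives on the voxel grid rather than on the 2D image plane, the per-keypoint likelihood is estimated directly in 3D, avoiding the perspective distortion of 2D heatmap approaches.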
Ranked #3 on Pose Estimation on ITOP front-view
no code implementations • 15 Jun 2017 • Gyeongsik Moon, Ju Yong Chang, Yumin Suh, Kyoung Mu Lee
We propose a novel approach to 3D human pose estimation from a single depth map.