Search Results for author: Shih-Yao Lin

Found 7 papers, 2 papers with code

TransFusion: Cross-view Fusion with Transformer for 3D Human Pose Estimation

1 code implementation • 18 Oct 2021 • Haoyu Ma, Liangjian Chen, Deying Kong, Zhe Wang, Xingwei Liu, Hao Tang, Xiangyi Yan, Yusheng Xie, Shih-Yao Lin, Xiaohui Xie

The 3D position encoding guided by the epipolar field provides an efficient way of encoding correspondences between pixels of different views.

Ranked #20 on 3D Human Pose Estimation on Human3.6M (using extra training data)

3D Human Pose Estimation · 3D Pose Estimation
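
The exact form of TransFusion's epipolar-field-guided position encoding is defined in the paper itself; as a rough, hedged illustration of the multi-view geometry it builds on, the sketch below computes the fundamental matrix between two calibrated views and scores how close a pixel in the second view lies to the epipolar line of a pixel in the first view. All camera parameters and function names here are illustrative assumptions, not the authors' code.

import numpy as np

def skew(t):
    # Cross-product (skew-symmetric) matrix of a 3-vector.
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, K2, R, t):
    # F such that x2^T F x1 = 0 for corresponding homogeneous pixels x1, x2.
    return np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

def epipolar_distance(F, x1, x2):
    # Distance of pixel x2 (view 2) to the epipolar line of pixel x1 (view 1).
    line = F @ np.append(x1, 1.0)                  # line coefficients (a, b, c)
    return abs(line @ np.append(x2, 1.0)) / np.hypot(line[0], line[1])

# Illustrative setup: two identical cameras related by a pure horizontal shift.
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
F = fundamental_matrix(K, K, np.eye(3), np.array([0.2, 0.0, 0.0]))
# A small distance marks a geometrically plausible cross-view match.
print(epipolar_distance(F, np.array([300.0, 200.0]), np.array([310.0, 200.0])))

A small epipolar distance flags pixel pairs that are geometrically consistent across views, which is the kind of correspondence cue a cross-view fusion transformer can exploit.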

MVHM: A Large-Scale Multi-View Hand Mesh Benchmark for Accurate 3D Hand Pose Estimation

no code implementations • 6 Dec 2020 • Liangjian Chen, Shih-Yao Lin, Yusheng Xie, Yen-Yu Lin, Xiaohui Xie

Based on the matching algorithm, we propose an efficient pipeline to generate a large-scale multi-view hand mesh (MVHM) dataset with accurate 3D hand mesh and joint labels.

3D Hand Pose Estimation

Temporal-Aware Self-Supervised Learning for 3D Hand Pose and Mesh Estimation in Videos

no code implementations • 6 Dec 2020 • Liangjian Chen, Shih-Yao Lin, Yusheng Xie, Yen-Yu Lin, Xiaohui Xie

Experiments show that our model achieves surprisingly good results, with 3D estimation accuracy on par with the state-of-the-art models trained with 3D annotations, highlighting the benefit of the temporal consistency in constraining 3D prediction models.

Pose Estimation · Self-Supervised Learning
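
The snippet above credits temporal consistency with constraining the 3D predictions; the paper's actual self-supervised objectives are not reproduced here, so the following is only a minimal sketch of a generic temporal-consistency term for per-frame 3D joint predictions. The use of PyTorch and all names below are assumptions for illustration.

import torch

def temporal_consistency_loss(joints_3d: torch.Tensor) -> torch.Tensor:
    # joints_3d: (T, J, 3) predicted 3D joints for T consecutive video frames.
    # Penalizing frame-to-frame velocity discourages temporally implausible
    # (jittery) 3D predictions without requiring any 3D annotation.
    velocity = joints_3d[1:] - joints_3d[:-1]      # (T-1, J, 3)
    return velocity.pow(2).mean()

# Usage: add this term to whatever other (e.g. 2D reprojection) losses are used.
preds = torch.randn(8, 21, 3, requires_grad=True)  # 8 frames, 21 hand joints
loss = temporal_consistency_loss(preds)
loss.backward()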

DGGAN: Depth-image Guided Generative Adversarial Networks for Disentangling RGB and Depth Images in 3D Hand Pose Estimation

no code implementations • 6 Dec 2020 • Liangjian Chen, Shih-Yao Lin, Yusheng Xie, Yen-Yu Lin, Wei Fan, Xiaohui Xie

Estimating 3D hand poses from RGB images is essential to a wide range of potential applications, but is challenging owing to substantial ambiguity in the inference of depth information from RGB images.

3D Hand Pose Estimation · Generative Adversarial Network
