1 code implementation • ECCV 2020 • Youngjoong Kwon, Stefano Petrangeli, Dahun Kim, Haoliang Wang, Eunbyung Park, Viswanathan Swaminathan, Henry Fuchs
Second, we introduce a novel loss to explicitly enforce consistency across generated views both in space and in time.
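The snippet above mentions a loss enforcing consistency across generated views in space and time. As a rough illustration only (the paper's actual formulation is not given here), a minimal sketch of such a penalty could compare adjacent viewpoints and adjacent frames; the function name, weights, and L2 form are all assumptions:

```python
import numpy as np

def consistency_loss(views, lam_space=1.0, lam_time=1.0):
    """Toy spatio-temporal consistency penalty (illustrative, not the paper's loss).

    views: array of shape (T, V, H, W) -- T time steps, V viewpoints.
    Penalizes squared differences between adjacent viewpoints (spatial term)
    and between adjacent time steps (temporal term).
    """
    spatial = np.mean((views[:, 1:] - views[:, :-1]) ** 2)   # view-to-view
    temporal = np.mean((views[1:] - views[:-1]) ** 2)        # frame-to-frame
    return lam_space * spatial + lam_time * temporal
```

In practice such terms are typically computed on warped or feature-space renderings rather than raw pixels, so treat this purely as the shape of the idea.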
no code implementations • 10 Apr 2023 • Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs
We present a method that enables synthesizing novel views and novel poses of arbitrary human performers from sparse multi-view images.
no code implementations • 3 Apr 2023 • Shengze Wang, Ziheng Wang, Ryan Schmelzle, Liujie Zheng, Youngjoong Kwon, Soumyadip Sengupta, Henry Fuchs
In this paper, we work to bring telepresence to every desktop.
no code implementations • 22 Apr 2022 • Shengze Wang, Youngjoong Kwon, Yuan Shen, Qian Zhang, Andrei State, Jia-Bin Huang, Henry Fuchs
Experiments on the HTI dataset show that our method outperforms the baseline in both per-frame image fidelity and spatial-temporal consistency.
1 code implementation • NeurIPS 2021 • Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs
To tackle this, we propose Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
Ranked #3 on Generalizable Novel View Synthesis on ZJU-MoCap
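The Neural Human Performer entry above describes learning generalizable radiance fields conditioned on observed multi-view images. A common pattern behind such methods is pixel-aligned conditioning: project a query 3D point into each source view, sample image features there, aggregate across views, and decode. The sketch below assumes that pattern with invented helper names, nearest-neighbor sampling, and mean aggregation; the actual method uses learned transformer-based aggregation and a parametric body model:

```python
import numpy as np

def project(point, cam):
    """Project a 3D point with a 3x4 camera projection matrix (pinhole assumed)."""
    p = cam @ np.append(point, 1.0)
    return p[:2] / p[2]

def sample_feature(feat_map, uv):
    """Nearest-neighbor feature lookup, clamped to the map bounds."""
    h, w, _ = feat_map.shape
    x = int(np.clip(round(float(uv[0])), 0, w - 1))
    y = int(np.clip(round(float(uv[1])), 0, h - 1))
    return feat_map[y, x]

def query_radiance(point, cams, feat_maps, mlp):
    """Aggregate pixel-aligned features across source views, then decode.

    mlp: any callable mapping the concatenated (point, feature) vector
    to a radiance/density prediction -- a stand-in for the learned decoder.
    """
    feats = [sample_feature(f, project(point, c)) for c, f in zip(cams, feat_maps)]
    agg = np.mean(feats, axis=0)  # mean aggregation; a simplification
    return mlp(np.concatenate([point, agg]))
```

This is only the conditioning skeleton; volume rendering along rays and the body-model anchoring that make the approach robust are omitted.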