no code implementations • 2 Apr 2024 • Ziqian Bai, Feitong Tan, Sean Fanello, Rohit Pandey, Mingsong Dou, Shichen Liu, Ping Tan, Yinda Zhang
To address these challenges, we propose a novel fast 3D neural implicit head avatar model that achieves real-time rendering while maintaining fine-grained controllability and high rendering quality.
no code implementations • 19 Feb 2024 • Zhixuan Yu, Ziqian Bai, Abhimitra Meka, Feitong Tan, Qiangeng Xu, Rohit Pandey, Sean Fanello, Hyun Soo Park, Yinda Zhang
Traditional methods for constructing high-quality, personalized head avatars from monocular videos demand extensive face captures and training time, posing a significant challenge for scalability.
no code implementations • 11 Jan 2024 • Peng Dai, Feitong Tan, Xin Yu, Yinda Zhang, Xiaojuan Qi
To this end, we propose a new method, GO-NeRF, capable of utilizing scene context for high-quality and harmonious 3D object generation within an existing NeRF.
no code implementations • 8 Dec 2023 • Zhen Wang, Qiangeng Xu, Feitong Tan, Menglei Chai, Shichen Liu, Rohit Pandey, Sean Fanello, Achuta Kadambi, Yinda Zhang
State-of-the-art results from extensive experiments demonstrate MVDD's excellent ability in 3D shape generation, depth completion, and its potential as a 3D prior for downstream tasks.
no code implementations • 5 Dec 2023 • Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas Funkhouser, Chen Change Loy, Yinda Zhang
We present a novel framework for generating photorealistic 3D human heads and subsequently manipulating and reposing them with remarkable flexibility.
no code implementations • CVPR 2023 • Ziqian Bai, Feitong Tan, Zeng Huang, Kripasindhu Sarkar, Danhang Tang, Di Qiu, Abhimitra Meka, Ruofei Du, Mingsong Dou, Sergio Orts-Escolano, Rohit Pandey, Ping Tan, Thabo Beeler, Sean Fanello, Yinda Zhang
The learnt avatar is driven by a parametric face model to achieve user-controlled facial expressions and head poses.
no code implementations • 13 Jan 2022 • Feitong Tan, Sean Fanello, Abhimitra Meka, Sergio Orts-Escolano, Danhang Tang, Rohit Pandey, Jonathan Taylor, Ping Tan, Yinda Zhang
We propose VoLux-GAN, a generative framework to synthesize 3D-aware faces with convincing relighting.
1 code implementation • CVPR 2021 • Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Rohit Pandey, Cem Keskin, Ruofei Du, Deqing Sun, Sofien Bouaziz, Sean Fanello, Ping Tan, Yinda Zhang
In this paper, we address the problem of building dense correspondences between human images under arbitrary camera viewpoints and body poses.
1 code implementation • CVPR 2020 • Feitong Tan, Hao Zhu, Zhaopeng Cui, Siyu Zhu, Marc Pollefeys, Ping Tan
Previous methods on estimating detailed human depth often require supervised training with "ground truth" depth data.
4 code implementations • CVPR 2020 • Xiaodong Gu, Zhiwen Fan, Zuozhuo Dai, Siyu Zhu, Feitong Tan, Ping Tan
Deep multi-view stereo (MVS) and stereo matching approaches generally construct 3D cost volumes to regularize and regress the output depth or disparity.
Ranked #12 on Point Clouds on Tanks and Temples
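The cost-volume idea above can be sketched in a few lines. This is not the paper's learned network, just a minimal NumPy illustration: for each candidate disparity `d`, the right image is shifted by `d` pixels and compared against the left image, stacking the per-pixel matching costs into a 3D volume; a winner-take-all `argmin` over the disparity axis then gives a crude disparity map (the deep approaches instead regularize and regress over this volume).

```python
import numpy as np

def stereo_cost_volume(left, right, max_disp):
    """Build a (max_disp, H, W) absolute-difference cost volume.

    volume[d, y, x] = |left[y, x] - right[y, x - d]|; pixels where
    x - d falls outside the image are left at +inf (invalid).
    """
    H, W = left.shape
    volume = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # Compare the left image against the right image shifted by d.
        volume[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    return volume

# Winner-take-all readout (the crudest possible "regression"):
#   disparity = stereo_cost_volume(left, right, D).argmin(axis=0)
```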
1 code implementation • ICCV 2019 • Sicong Tang, Feitong Tan, Kelvin Cheng, Zhaoyang Li, Siyu Zhu, Ping Tan
To achieve this goal, we separate the depth map into a smooth base shape and a residual detail shape and design a network with two branches to regress them respectively.
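The base/detail split described above can be illustrated with a simple hand-crafted filter (a stand-in for the paper's two learned branches, which regress the components directly): low-pass filter the depth map to get the smooth base shape, and keep the residual as the detail shape, so the two sum back to the original exactly.

```python
import numpy as np

def decompose_depth(depth, k=5):
    """Split a depth map into a smooth base and a residual detail layer.

    A k x k box filter stands in for the smooth-base branch; the
    residual (depth - base) plays the role of the detail branch.
    """
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    H, W = depth.shape
    base = np.zeros((H, W), dtype=float)
    for dy in range(k):
        for dx in range(k):
            base += padded[dy:dy + H, dx:dx + W]
    base /= k * k
    detail = depth - base  # high-frequency residual
    return base, detail
```

By construction `base + detail` reconstructs the input depth map, mirroring how the two network branches jointly explain the full depth.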
no code implementations • CVPR 2018 • Luwei Yang, Feitong Tan, Ao Li, Zhaopeng Cui, Yasutaka Furukawa, Ping Tan
This paper presents a novel polarimetric dense monocular SLAM (PDMS) algorithm based on a polarization camera.