no code implementations • 12 Jan 2024 • Wei Cao, Chang Luo, Biao Zhang, Matthias Nießner, Jiapeng Tang
To address these challenges, we introduce a diffusion model that explicitly learns the shape and motion distribution of non-rigid objects through an iterative denoising process of compressed latent representations.
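The iterative denoising idea can be illustrated with a toy DDPM-style reverse loop over a latent vector. This is a minimal sketch, not the authors' code: the trained noise-prediction network is replaced by a stand-in oracle that is consistent with a fixed "clean" latent, and the latent dimension, step count, and noise schedule are arbitrary choices.

```python
import numpy as np

# Toy sketch of DDPM-style iterative denoising over a compressed latent.
# A real model learns predict_noise from data; here it is a stand-in
# oracle consistent with a fixed hypothetical "clean" latent `target`.

rng = np.random.default_rng(0)
T = 50                                  # number of denoising steps (arbitrary)
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

target = np.ones(8)                     # hypothetical clean latent code

def predict_noise(z_t, t):
    # Stand-in for a trained noise predictor eps_theta(z_t, t): returns
    # the noise that would explain z_t given the fixed target latent.
    return (z_t - np.sqrt(alpha_bars[t]) * target) / np.sqrt(1.0 - alpha_bars[t])

z = rng.standard_normal(8)              # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(z, t)
    # DDPM posterior mean update
    z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:                           # no noise injected at the final step
        z = z + np.sqrt(betas[t]) * rng.standard_normal(8)

print(np.round(z, 2))                   # latent recovered close to `target`
```

With the oracle denoiser the loop collapses onto the clean latent; in the paper's setting the denoiser is a learned network and the latent encodes shape and motion.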
no code implementations • 11 Jan 2024 • Barry Shichen Hu, Siyun Liang, Johannes Paetzold, Huy H. Nguyen, Isao Echizen, Jiapeng Tang
To avoid these limitations, we first unify the design choices in previous works and then propose a simplified Transformer-based model to extract richer and more robust geometric features for the surface normal estimation task.
no code implementations • 2 Dec 2023 • Jiapeng Tang, Angela Dai, Yinyu Nie, Lev Markhasin, Justus Thies, Matthias Nießner
We introduce Diffusion Parametric Head Models (DPHMs), a generative model that enables robust volumetric head reconstruction and tracking from monocular depth sequences.
no code implementations • 24 Mar 2023 • Jiapeng Tang, Yinyu Nie, Lev Markhasin, Angela Dai, Justus Thies, Matthias Nießner
We introduce a diffusion network to synthesize a collection of 3D indoor objects by denoising a set of unordered object attributes.
1 code implementation • 26 Jan 2023 • Biao Zhang, Jiapeng Tang, Matthias Nießner, Peter Wonka
We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models.
no code implementations • CVPR 2023 • Jiabao Lei, Jiapeng Tang, Kui Jia
More specifically, we maintain an intermediate surface mesh used to render new RGBD views; each rendered RGBD view is completed by an inpainting network, back-projected as a partial surface, and merged into the intermediate mesh.
no code implementations • 11 Oct 2022 • Jiapeng Tang, Lev Markhasin, Bi Wang, Justus Thies, Matthias Nießner
To this end, we introduce transformer-based deformation networks that represent a shape deformation as a composition of local surface deformations.
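Representing a shape deformation as a composition of local surface deformations can be sketched numerically. In this illustration (my own, not the paper's method), each local "handle" carries a displacement and surface points blend nearby displacements with Gaussian distance weights; the paper instead uses learned transformer-based networks, so the handles, weights, and parameters here are stand-in assumptions.

```python
import numpy as np

# Sketch: a global deformation composed from local deformations.
# Each handle carries its own offset; a surface point moves by the
# distance-weighted blend of the handles' offsets. Gaussian weights
# are a stand-in for what a learned network would predict.

def deform(points, handles, offsets, sigma=0.5):
    # points:  (N, 3) surface points
    # handles: (K, 3) centers of local deformations
    # offsets: (K, 3) displacement carried by each handle
    d2 = ((points[:, None, :] - handles[None, :, :]) ** 2).sum(-1)  # (N, K)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w = w / w.sum(axis=1, keepdims=True)   # normalized blend weights
    return points + w @ offsets            # blended local displacements

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
handles = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
offsets = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])  # opposing local moves
out = deform(pts, handles, offsets)
```

Each point is dominated by its nearest handle, so the first point shifts upward and the second downward, while remaining influenced by the other handle's deformation.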
no code implementations • 10 Feb 2022 • Xianggang Yu, Jiapeng Tang, Yipeng Qin, Chenghong Li, Linchao Bao, Xiaoguang Han, Shuguang Cui
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images for novel view synthesis.
1 code implementation • CVPR 2021 • Yi Fang, Jiapeng Tang, Wang Shen, Wei Shen, Xiao Gu, Li Song, Guangtao Zhai
In the third stage, we use the generated dual attention as guidance to perform two sub-tasks: (1) identifying whether the gaze target is inside or outside the image; (2) locating the target if inside.
1 code implementation • ICCV 2021 • Jiapeng Tang, Jiabao Lei, Dan Xu, Feiying Ma, Kui Jia, Lei Zhang
To this end, we propose to learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks, to simultaneously achieve advanced scalability to large-scale scenes, generality to novel shapes, and applicability to raw scans in a unified framework.
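The "sign-agnostic" idea can be shown with a toy 1D example: raw scans provide only unsigned distances, so the loss compares the magnitude of the predicted signed field against them, leaving the global sign free. This is a simplified illustration under my own assumptions, not the paper's implementation, which operates on convolutional occupancy networks in 3D.

```python
import numpy as np

# Sign-agnostic loss: compare |f(x)| against unsigned distances d(x),
# so raw point clouds without inside/outside labels can supervise a
# signed implicit field. Toy 1D field with the surface at x = c.

def sign_agnostic_loss(pred, unsigned_dist):
    return float(np.mean(np.abs(np.abs(pred) - unsigned_dist)))

c = 0.3                              # hypothetical surface location
xs = np.linspace(-1.0, 1.0, 201)
d = np.abs(xs - c)                   # unsigned distances, as a raw scan gives

f_good = xs - c                      # a signed field consistent with d
f_flipped = c - xs                   # globally flipped sign, equally valid

loss_good = sign_agnostic_loss(f_good, d)
loss_flipped = sign_agnostic_loss(f_flipped, d)
print(loss_good, loss_flipped)       # both fields minimize the loss
```

Both sign assignments reach zero loss, which is exactly the sign-agnostic property: supervision never needs oriented normals or inside/outside labels.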
1 code implementation • CVPR 2021 • Jiapeng Tang, Dan Xu, Kui Jia, Lei Zhang
This paper focuses on the task of 4D shape reconstruction from a sequence of point clouds.
1 code implementation • 13 Aug 2020 • Jiapeng Tang, Xiaoguang Han, Mingkui Tan, Xin Tong, Kui Jia
However, each has its own drawbacks and cannot properly reconstruct the surface shapes of complex topologies, arguably due to a lack of constraints on the topological structures in their learning frameworks.
no code implementations • ICCV 2019 • Junyi Pan, Xiaoguang Han, Weikai Chen, Jiapeng Tang, Kui Jia
The key to our approach is a novel progressive shaping framework that alternates between mesh deformation and topology modification.
Ranked #3 on 3D Shape Reconstruction on Pix3D
1 code implementation • CVPR 2019 • Jiapeng Tang, Xiaoguang Han, Junyi Pan, Kui Jia, Xin Tong
To this end, we propose in this paper a skeleton-bridged, stage-wise learning approach to address the challenge.