Search Results for author: Xinlin Ren

Found 5 papers, 3 papers with code

MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo

1 code implementation • 22 Jan 2024 • Chenjie Cao, Xinlin Ren, Yanwei Fu

Recent advancements in learning-based Multi-View Stereo (MVS) methods have prominently featured transformer-based models with attention mechanisms.

3D Reconstruction • Depth Estimation +1

Rethinking the Multi-view Stereo from the Perspective of Rendering-based Augmentation

no code implementations • 11 Mar 2023 • Chenjie Cao, Xinlin Ren, Xiangyang Xue, Yanwei Fu

To address these problems, we first apply one of the state-of-the-art learning-based MVS methods, MVSFormer, to overcome intractable scenarios such as textureless and reflective regions that challenge traditional PatchMatch methods, but it still fails on a few large-scene reconstructions.

LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling

1 code implementation • 18 Aug 2022 • Boyan Jiang, Xinlin Ren, Mingsong Dou, Xiangyang Xue, Yanwei Fu, Yinda Zhang

Recent progress in 4D implicit representation focuses on globally controlling the shape and motion with low-dimensional latent vectors, which is prone to missing surface details and accumulating tracking errors.

3D Shape Modeling • 4D Reconstruction +1

MVSFormer: Multi-View Stereo by Learning Robust Image Features and Temperature-based Depth

1 code implementation • 4 Aug 2022 • Chenjie Cao, Xinlin Ren, Yanwei Fu

In this paper, we propose MVSFormer, a pre-trained-ViT-enhanced MVS network that learns more reliable feature representations by benefiting from the informative priors of ViT.

3D Reconstruction • Point Clouds +1

Density-preserving Deep Point Cloud Compression

no code implementations • CVPR 2022 • Yun He, Xinlin Ren, Danhang Tang, Yinda Zhang, Xiangyang Xue, Yanwei Fu

To address this, we propose a novel deep point cloud compression method that preserves local density information.
