Search Results for author: Jingliang Li

Found 6 papers, 0 papers with code

ES-MVSNet: Efficient Framework for End-to-end Self-supervised Multi-View Stereo

no code implementations • 4 Aug 2023 • Qiang Zhou, Chaohui Yu, Jingliang Li, Yuang Liu, Jing Wang, Zhibin Wang

to provide additional consistency constraints, which increases GPU memory consumption and complicates the model's structure and training pipeline.

Optical Flow Estimation, Semantic Segmentation

Improved Neural Radiance Fields Using Pseudo-depth and Fusion

no code implementations • 27 Jul 2023 • Jingliang Li, Qiang Zhou, Chaohui Yu, Zhengda Lu, Jun Xiao, Zhibin Wang, Fan Wang

To make the constructed volumes as close as possible to the surfaces of objects in the scene and the rendered depth more accurate, we propose to perform depth prediction and radiance field reconstruction simultaneously.

Depth Estimation, Depth Prediction, +1
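The snippet above describes optimizing depth prediction and radiance-field reconstruction together. Below is a minimal sketch of that idea, assuming a NeRF-style renderer that returns both a color and an expected depth per ray; all names are illustrative placeholders, not the paper's code.

```python
# Minimal sketch (PyTorch): supervise rendered color with ground truth and
# regularize rendered depth with a separately predicted pseudo-depth.
import torch
import torch.nn.functional as F

def joint_loss(rendered_rgb, rendered_depth, gt_rgb, pseudo_depth, lambda_depth=0.1):
    """rendered_rgb:   (N, 3) colors composited along each ray
       rendered_depth: (N,)   expected termination depth per ray
       gt_rgb:         (N, 3) ground-truth pixel colors
       pseudo_depth:   (N,)   depth predicted by a separate depth network"""
    loss_rgb = F.mse_loss(rendered_rgb, gt_rgb)
    # Encourage the radiance field's rendered depth to agree with the pseudo-depth.
    loss_depth = F.l1_loss(rendered_depth, pseudo_depth)
    return loss_rgb + lambda_depth * loss_depth

if __name__ == "__main__":
    # Random tensors stand in for a batch of 1024 rays.
    n = 1024
    loss = joint_loss(torch.rand(n, 3), torch.rand(n), torch.rand(n, 3), torch.rand(n))
    print(float(loss))
```

The photometric term remains the primary supervision; the depth term only nudges the geometry toward the predicted pseudo-depth.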

Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation

no code implementations • 26 Jul 2023 • Chaohui Yu, Qiang Zhou, Jingliang Li, Zhe Zhang, Zhibin Wang, Fan Wang

To better utilize the sparse 3D points, we propose an efficient point cloud guidance loss to adaptively drive the NeRF's geometry to align with the shape of the sparse 3D points.

3D Generation, Text to 3D
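The point cloud guidance loss mentioned above can be pictured as a Chamfer-style attraction between points extracted from the NeRF geometry and the sparse guidance points. This is a hedged sketch under that reading; the paper's actual formulation may differ, and all names here are made up.

```python
# Sketch (PyTorch): a symmetric Chamfer-style term that pulls points sampled
# from the current radiance-field geometry toward a sparse guidance point cloud.
import torch

def point_guidance_loss(nerf_surface_points, sparse_points):
    """nerf_surface_points: (M, 3) points extracted from the current NeRF geometry
       sparse_points:       (K, 3) sparse 3D points used as shape guidance"""
    # Pairwise squared distances between the two point sets: (M, K).
    d2 = torch.cdist(nerf_surface_points, sparse_points) ** 2
    # Each NeRF point is attracted to its nearest guidance point, and vice versa.
    loss_to_guidance = d2.min(dim=1).values.mean()
    loss_from_guidance = d2.min(dim=0).values.mean()
    return loss_to_guidance + loss_from_guidance

if __name__ == "__main__":
    loss = point_guidance_loss(torch.rand(2048, 3), torch.rand(500, 3))
    print(float(loss))
```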

LMSeg: Language-guided Multi-dataset Segmentation

no code implementations • 27 Feb 2023 • Qiang Zhou, Yuang Liu, Chaohui Yu, Jingliang Li, Zhibin Wang, Fan Wang

Instead of relabeling each dataset with the unified taxonomy, a category-guided decoding module is designed to dynamically guide predictions to each dataset's taxonomy.

Image Augmentation, Panoptic Segmentation, +1
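One way to picture category-guided decoding is to classify decoder embeddings against text embeddings of the active dataset's own class names, so no relabeling into a unified taxonomy is needed. The sketch below is an illustration under that reading, with hypothetical shapes and names; it is not LMSeg's implementation.

```python
# Sketch (PyTorch): route predictions to each dataset's own taxonomy by scoring
# dense decoder embeddings against text embeddings of that dataset's class names.
import torch
import torch.nn.functional as F

def category_guided_logits(pixel_embed, class_text_embed, temperature=0.07):
    """pixel_embed:      (B, D, H, W) dense embeddings from the segmentation decoder
       class_text_embed: (C, D)       text embeddings of the *current* dataset's classes
       returns:          (B, C, H, W) per-pixel logits over that dataset's taxonomy"""
    pixel_embed = F.normalize(pixel_embed, dim=1)
    class_text_embed = F.normalize(class_text_embed, dim=1)
    return torch.einsum("bdhw,cd->bchw", pixel_embed, class_text_embed) / temperature

if __name__ == "__main__":
    # Two datasets with different numbers of classes share the same decoder output.
    feats = torch.rand(2, 256, 64, 64)
    for num_classes in (19, 150):   # e.g. a Cityscapes-sized vs an ADE20K-sized taxonomy
        text = torch.rand(num_classes, 256)
        print(category_guided_logits(feats, text).shape)
```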

DS-MVSNet: Unsupervised Multi-view Stereo via Depth Synthesis

no code implementations • 13 Aug 2022 • Jingliang Li, Zhengda Lu, Yiqun Wang, Ying Wang, Jun Xiao

To mine the information in the probability volume, we creatively synthesize the source depths by splatting the probability volume and depth hypotheses to source views.
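The depth-synthesis step described above can be approximated as forward-splatting confidence-weighted reference depths into each source view. The sketch below assumes pinhole intrinsics and a known reference-to-source pose; the names and camera conventions are placeholders rather than DS-MVSNet's code.

```python
# Sketch (PyTorch): synthesize a source-view depth map by forward-splatting
# probability-weighted depth from the reference view into the source view.
import torch

def synthesize_source_depth(prob_volume, depth_hyps, K_ref, K_src, R, t, src_hw):
    """prob_volume:  (D, H, W) per-pixel probabilities over depth hypotheses
       depth_hyps:   (D,)      depth hypothesis values
       K_ref, K_src: (3, 3)    intrinsics; R: (3, 3), t: (3,) map ref camera -> src camera
       src_hw:       (H_s, W_s) size of the synthesized source depth map"""
    D, H, W = prob_volume.shape
    H_s, W_s = src_hw
    # Expected depth under the probability volume (soft-argmax over hypotheses).
    ref_depth = (prob_volume * depth_hyps.view(D, 1, 1)).sum(0)   # (H, W)
    confidence = prob_volume.max(0).values                        # (H, W)

    # Back-project reference pixels to 3D, then transform into the source camera.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().reshape(3, -1)
    cam_ref = torch.linalg.inv(K_ref) @ pix * ref_depth.reshape(1, -1)
    cam_src = R @ cam_ref + t.view(3, 1)
    proj = K_src @ cam_src
    z = proj[2].clamp(min=1e-6)
    u = (proj[0] / z).round().long()
    v = (proj[1] / z).round().long()

    # Keep projections that land inside the source image and in front of the camera.
    valid = (u >= 0) & (u < W_s) & (v >= 0) & (v < H_s) & (proj[2] > 0)
    idx = v[valid] * W_s + u[valid]
    w = confidence.reshape(-1)[valid]

    # Confidence-weighted nearest-pixel splat, then normalize the accumulated depth.
    depth_acc = torch.zeros(H_s * W_s).index_add_(0, idx, w * z[valid])
    weight_acc = torch.zeros(H_s * W_s).index_add_(0, idx, w)
    return (depth_acc / weight_acc.clamp(min=1e-6)).reshape(H_s, W_s)

if __name__ == "__main__":
    # Toy example: identity pose, so the synthesized map matches the reference grid.
    D, H, W = 8, 32, 40
    prob = torch.softmax(torch.rand(D, H, W), dim=0)
    hyps = torch.linspace(1.0, 4.0, D)
    K = torch.tensor([[40.0, 0, 20], [0, 40.0, 16], [0, 0, 1]])
    depth = synthesize_source_depth(prob, hyps, K, K, torch.eye(3), torch.zeros(3), (H, W))
    print(depth.shape)
```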
