no code implementations • 21 Feb 2025 • Ziqian Ni, Sicong Du, Zhenghua Hou, Chenming Wu, Sheng Yang
To evaluate end-to-end autonomous driving systems, a simulation environment based on Novel View Synthesis (NVS) techniques is essential: it synthesizes photo-realistic images and point clouds from previously recorded sequences under new vehicle poses, particularly in cross-lane scenarios.
1 code implementation • 9 Dec 2024 • Zheng Chen, Chenming Wu, Zhelun Shen, Chen Zhao, Weicai Ye, Haocheng Feng, Errui Ding, Song-Hai Zhang
Wide-baseline panoramic images are frequently used in applications like VR and simulations to minimize capturing labor costs and storage needs.
1 code implementation • 29 Nov 2024 • Bojun Xiong, Jialun Liu, Jiakui Hu, Chenming Wu, Jinbo Wu, Xing Liu, Chen Zhao, Errui Ding, Zhouhui Lian
Physically Based Rendering (PBR) materials play a crucial role in modern graphics, enabling photorealistic rendering across diverse environment maps.
no code implementations • 19 Nov 2024 • Hao Li, Yuanyuan Gao, Haosong Peng, Chenming Wu, Weicai Ye, Yufeng Zhan, Chen Zhao, Dingwen Zhang, Jingdong Wang, Junwei Han
This paper presents DGTR, a novel distributed framework for efficient Gaussian reconstruction of vast scenes from sparse views.
no code implementations • 21 Jul 2024 • Yiqun Zhao, Chenming Wu, Binbin Huang, YiHao Zhi, Chen Zhao, Jingdong Wang, Shenghua Gao
Efficient and accurate reconstruction of a relightable, dynamic clothed human avatar from a monocular video is crucial for the entertainment industry.
no code implementations • 26 Jun 2024 • Hao Li, Ming Yuan, Yan Zhang, Chenming Wu, Chen Zhao, Chunyu Song, Haocheng Feng, Errui Ding, Dingwen Zhang, Jingdong Wang
To this end, this paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
no code implementations • 26 Jun 2024 • Hao Li, Jingfeng Li, Dingwen Zhang, Chenming Wu, Jieqi Shi, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han
Dynamic Gaussian splatting has led to impressive advances in scene reconstruction and novel-view image synthesis.
no code implementations • 4 Jun 2024 • Yanmin Wu, Jiarui Meng, Haijie Li, Chenming Wu, Yahao Shi, Xinhua Cheng, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Jian Zhang
To ensure robust feature presentation and 3D point-level understanding, we first employ SAM masks without cross-frame associations to train instance features with 3D consistency.
no code implementations • 29 Mar 2024 • Zhuopeng Li, Yilin Zhang, Chenming Wu, Jianke Zhu, Liangjun Zhang
The rapid growth of 3D Gaussian Splatting (3DGS) has revolutionized neural rendering, enabling real-time production of high-quality renderings.
no code implementations • 22 Mar 2024 • Jinbo Wu, Xing Liu, Chenming Wu, Xiaobo Gao, Jialun Liu, Xinqi Liu, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang
We propose an optimal viewpoint selection strategy that finds the smallest set of viewpoints covering all the faces of a mesh.
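Finding a minimal set of viewpoints covering all mesh faces is an instance of the set-cover problem; a greedy approximation is the standard baseline for it. The sketch below is purely illustrative (the function name, the `visibility` input format, and the greedy strategy are assumptions, not the paper's actual algorithm):

```python
def greedy_viewpoint_cover(visibility):
    """Greedy set cover over candidate viewpoints.

    visibility: dict mapping a viewpoint id to the set of mesh face ids
    visible from that viewpoint. Returns a small (not necessarily
    optimal) list of viewpoints that together cover every coverable face.
    """
    uncovered = set().union(*visibility.values())
    chosen = []
    while uncovered:
        # Pick the viewpoint that reveals the most still-uncovered faces.
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:
            break  # remaining faces are not visible from any viewpoint
        chosen.append(best)
        uncovered -= gain
    return chosen
```

The greedy choice gives the classic ln(n) approximation guarantee for set cover; an exact minimum would require an ILP or exhaustive search.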
no code implementations • 15 Mar 2024 • Hao Li, Yuanyuan Gao, Chenming Wu, Dingwen Zhang, Yalun Dai, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han
Specifically, we design a novel joint learning framework that consists of an Iterative Pose Optimization Network (IPO-Net) and a Generalizable 3D-Gaussians (G-3DG) model.
no code implementations • 26 Feb 2024 • Xinqi Liu, Chenming Wu, Jialun Liu, Xing Liu, Jinbo Wu, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang
In this paper, we present a novel method that facilitates the creation of vivid 3D Gaussian avatars from monocular video inputs (GVA).
no code implementations • CVPR 2024 • Jialun Liu, Chenming Wu, Xinqi Liu, Xing Liu, Jinbo Wu, Haotian Peng, Chen Zhao, Haocheng Feng, Jingtuo Liu, Errui Ding
This model gradually reduces the texture noise on the octree nodes, resulting in the restoration of fine textures.
no code implementations • CVPR 2024 • Zexian Yang, Dayan Wu, Chenming Wu, Zheng Lin, Jingzi Gu, Weiping Wang
Witnessing the impressive multimodal understanding capabilities of the Vision-Language Foundation Model CLIP, a recent two-stage CLIP-based method employs automated prompt engineering to obtain specific textual labels for classifying pedestrians.
1 code implementation • 8 Dec 2023 • Yahao Shi, Yanmin Wu, Chenming Wu, Xing Liu, Chen Zhao, Haocheng Feng, Jian Zhang, Bin Zhou, Errui Ding, Jingdong Wang
Our method achieves state-of-the-art performance in both relighting and novel view synthesis tasks among the recently proposed inverse rendering methods while achieving real-time rendering.
no code implementations • 28 Nov 2023 • Zhuopeng Li, Chenming Wu, Liangjun Zhang, Jianke Zhu
Despite the recent success of Neural Radiance Field (NeRF), it is still challenging to render large-scale driving scenes with long trajectories, particularly when the rendering quality and efficiency are in high demand.
1 code implementation • 30 Sep 2023 • Jianhao Yan, Jin Xu, Chiyu Song, Chenming Wu, Yafu Li, Yue Zhang
This paper explores the elusive mechanism underpinning in-context learning in Large Language Models (LLMs).
no code implementations • 8 Aug 2023 • Chen Wang, Jiadai Sun, Lina Liu, Chenming Wu, Zhelun Shen, Dayan Wu, Yuchao Dai, Liangjun Zhang
However, the shape-radiance ambiguity of radiance fields remains a challenge, especially in the sparse viewpoints setting.
no code implementations • 27 Jul 2023 • Chenming Wu, Jiadai Sun, Zhelun Shen, Liangjun Zhang
The key insight is that map information can be utilized as a prior to guide the training of the radiance fields with uncertainty.
1 code implementation • 11 Jul 2023 • Shukai Liu, Chenming Wu, Ying Li, Liangjun Zhang
This paper presents a new method that uses scores provided by humans instead of pairwise preferences to improve the feedback efficiency of interactive reinforcement learning.
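Using scalar human scores instead of pairwise preferences means the reward model can be fit by plain regression rather than a Bradley-Terry preference model. A minimal sketch of that idea, assuming a hand-rolled linear reward model and ridge regression (the function name, feature representation, and regularizer are illustrative, not the paper's actual method):

```python
import numpy as np

def fit_reward_from_scores(features, scores, l2=1e-3):
    """Fit a linear reward model r(s) = w . phi(s) by ridge regression
    on scalar human scores. With pairwise preferences one would instead
    maximize a Bradley-Terry likelihood over comparisons; direct scores
    let every labeled sample constrain the reward on its own.

    features: (n, d) array of state features phi(s)
    scores:   (n,) array of human-provided scalar ratings
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(scores, dtype=float)
    # Closed-form ridge solution: (X^T X + l2*I) w = X^T y
    A = X.T @ X + l2 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```

The fitted `w` can then serve as a learned reward signal for the RL agent, which is the general shape of reward learning from human feedback.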
1 code implementation • 13 Jun 2023 • Shi Mao, Chenming Wu, Zhelun Shen, Yifan Wang, Dayan Wu, Liangjun Zhang
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces using pre-integrated rendering from multi-view images or video.
1 code implementation • 29 Jan 2023 • Jin Fang, Dingfu Zhou, Jingjing Zhao, Chenming Wu, Chulin Tang, Cheng-Zhong Xu, Liangjun Zhang
This setting results in two distinct domain gaps: scenarios and sensors, making it difficult to analyze and evaluate the method accurately.
no code implementations • 29 Jun 2021 • Jianhao Yan, Chenming Wu, Fandong Meng, Jie Zhou
Current evaluation of an NMT system is usually built upon a heuristic decoding algorithm (e.g., beam search) and an evaluation metric assessing similarity between the translation and the golden reference.