Search Results for author: Yingliang Zhang

Found 12 papers, 3 papers with code

LetsGo: Large-Scale Garage Modeling and Rendering via LiDAR-Assisted Gaussian Primitives

no code implementations • 15 Apr 2024 • Jiadi Cui, Junming Cao, Yuhui Zhong, Liao Wang, Fuqiang Zhao, Penghao Wang, Yifan Chen, Zhipeng He, Lan Xu, Yujiao Shi, Yingliang Zhang, Jingyi Yu

We demonstrate that the LiDAR point cloud collected by the Polar device enhances a suite of 3D Gaussian splatting algorithms for garage scene modeling and rendering.

3D Reconstruction • Pose Estimation
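The paper ships no code, but the core idea of LiDAR-assisted Gaussian splatting can be illustrated with a small sketch: use the scan points directly as initial Gaussian means and derive initial scales from local point density. All names and values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def init_gaussians_from_lidar(points, colors, k=3):
    """Seed 3D Gaussian primitives from a LiDAR scan (illustrative only).

    points : (N, 3) scan positions, used directly as Gaussian means.
    colors : (N, 3) per-point RGB, used as initial base colors.
    k      : neighbors used to set each Gaussian's initial scale.
    """
    n = points.shape[0]
    # Initial scale = mean distance to the k nearest neighbors, so sparse
    # regions start with larger Gaussians and dense regions with smaller
    # ones. (O(N^2) here for brevity; real pipelines use a KD-tree.)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    scale = np.sqrt(np.sort(d2, axis=1)[:, :k]).mean(axis=1)
    return {
        "means": points,                                    # (N, 3)
        "scales": np.repeat(scale[:, None], 3, axis=1),     # isotropic start
        "rotations": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)), # identity quaternions
        "opacities": np.full((n, 1), 0.1),                  # low, to be optimized
        "colors": colors,
    }
```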

HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting

no code implementations • 6 Dec 2023 • Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, Lan Xu

Then, we utilize a 4D Gaussian optimization scheme with adaptive spatial-temporal regularizers to effectively balance the non-rigid prior and Gaussian updating.
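No code is available; to make the regularization idea concrete, here is a loose sketch of a 4D Gaussian objective with a photometric term plus temporal and spatial smoothness terms. The function, tensor layout, and weights are all illustrative assumptions, not the paper's actual losses.

```python
import torch

def spatio_temporal_gaussian_loss(rendered, target, means_t, means_prev,
                                  neighbors, w_temporal=0.1, w_smooth=0.05):
    """Sketch of a 4D Gaussian objective (terms and weights are ours).

    means_t, means_prev : (N, 3) Gaussian centers at frames t and t-1.
    neighbors           : (N, K) long tensor of per-Gaussian neighbor indices.
    """
    # Photometric term drives the per-frame Gaussian update.
    photo = torch.nn.functional.l1_loss(rendered, target)
    # Temporal term penalizes large per-frame drift (non-rigid prior).
    motion = means_t - means_prev                  # (N, 3) motion field
    temporal = motion.norm(dim=-1).mean()
    # Spatial term asks neighboring Gaussians to move coherently,
    # an as-rigid-as-possible-style smoothness on the motion field.
    smooth = (motion[:, None, :] - motion[neighbors]).norm(dim=-1).mean()
    return photo + w_temporal * temporal + w_smooth * smooth
```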

Human Performance Modeling and Rendering via Neural Animated Mesh

1 code implementation • 18 Sep 2022 • Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos.

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time

no code implementations • CVPR 2022 • Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.
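No code is available, but the central trick, storing each octree leaf's time-varying values as Fourier coefficients so any time t can be queried in closed form, can be sketched as follows (the coefficient layout is an assumption, not necessarily the paper's):

```python
import numpy as np

def eval_fourier_leaf(coeffs, t):
    """Evaluate a time-varying PlenOctree leaf at time t in [0, 1).

    coeffs : (2K+1, C) Fourier coefficients per channel: DC term first,
             then K cosine and K sine terms (layout is an assumption).
    Returns the (C,) leaf values, e.g. density plus SH coefficients.
    """
    k = (coeffs.shape[0] - 1) // 2
    freqs = 2.0 * np.pi * np.arange(1, k + 1) * t
    basis = np.concatenate(([1.0], np.cos(freqs), np.sin(freqs)))
    return basis @ coeffs  # closed-form query at any time t
```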

Artemis: Articulated Neural Pets with Appearance and Motion synthesis

1 code implementation • 11 Feb 2022 • Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang Zhang, Wei Yang, Lan Xu, Jingyi Yu

Our ARTEMIS enables interactive motion control, real-time animation, and photo-realistic rendering of furry animals.

Motion Synthesis

HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs

no code implementations • CVPR 2022 • Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu

The raw HumanNeRF can already produce reasonable renderings from sparse video inputs of unseen subjects and camera settings.

Editable Free-viewpoint Video Using a Layered Neural Representation

1 code implementation • 30 Apr 2021 • Jiakai Zhang, Xinhang Liu, Xinyi Ye, Fuqiang Zhao, Yanshun Zhang, Minye Wu, Yingliang Zhang, Lan Xu, Jingyi Yu

Such a layered representation supports full perception and realistic manipulation of the dynamic scene while still enabling a free viewing experience over a wide range.

Disentanglement • Scene Parsing • +1
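The official code is linked above; purely as an illustration of why a layered representation makes editing easy, here is a minimal front-to-back compositor over independently rendered layers (hypothetical interfaces, not the repository's API):

```python
import numpy as np

def composite_layers(layers):
    """Front-to-back alpha compositing of independently rendered layers.

    layers : list of (rgb, alpha) pairs ordered near-to-far;
             rgb is (H, W, 3), alpha is (H, W, 1).
    Editing the scene then amounts to reordering, dropping, or
    transforming entries of this list before compositing.
    """
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    transmittance = np.ones((h, w, 1))  # how much light still passes through
    for rgb, alpha in layers:
        out += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return out
```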

NeuralHumanFVV: Real-Time Neural Volumetric Human Performance Rendering using RGB Cameras

no code implementations • CVPR 2021 • Xin Suo, Yuheng Jiang, Pei Lin, Yingliang Zhang, Kaiwen Guo, Minye Wu, Lan Xu

4D reconstruction and rendering of human activities is critical for immersive VR/AR experiences. Recent advances still fail to recover fine geometry and texture with the level of detail present in the input images from sparse multi-view RGB cameras.

4D reconstruction • Multi-Task Learning

Deep Surface Light Fields

no code implementations • 15 Oct 2018 • Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu

A surface light field represents the radiance of rays originating from any point on the surface in any direction.

Data Compression • Image Registration
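No code is available; the definition above maps naturally to a learned function f(surface point, view direction) -> RGB. A toy PyTorch version, with an architecture that is ours rather than the paper's:

```python
import torch
import torch.nn as nn

class SurfaceLightField(nn.Module):
    """Toy surface light field: (surface point, view direction) -> RGB."""

    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, x, d):
        # x: (B, 3) points on the surface; d: (B, 3) unit view directions.
        return self.net(torch.cat([x, d], dim=-1))
```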

Semantic See-Through Rendering on Light Fields

no code implementations • 26 Mar 2018 • Huangjie Yu, Guli Zhang, Yuanxi Ma, Yingliang Zhang, Jingyi Yu

We present a novel semantic light field (LF) refocusing technique that can achieve unprecedented see-through quality.

Stereo Matching • Hand
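No code is available; a rough sketch of the general see-through idea: shift-and-average the sub-aperture views as in conventional light-field refocusing, but weight every ray by a semantic mask so occluder pixels drop out. The interface and masking scheme are assumptions, not the paper's method:

```python
import numpy as np

def semantic_refocus(views, masks, uv, disparity):
    """See-through refocusing over a light field (illustrative sketch).

    views     : (V, H, W, 3) sub-aperture images.
    masks     : (V, H, W) semantic weights, 0 on occluder pixels so
                foreground clutter vanishes from the refocused result.
    uv        : (V, 2) angular coordinates of each view.
    disparity : pixel shift per unit of angular offset (the focal depth).
    """
    acc = np.zeros(views.shape[1:])
    wsum = np.zeros(views.shape[1:3] + (1,))
    for img, m, (u, v) in zip(views, masks, uv):
        shift = (int(round(v * disparity)), int(round(u * disparity)))
        w = np.roll(m, shift, axis=(0, 1))[..., None]
        acc += w * np.roll(img, shift, axis=(0, 1))
        wsum += w
    return acc / np.maximum(wsum, 1e-6)  # masked average over views
```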

Ray Space Features for Plenoptic Structure-From-Motion

no code implementations • ICCV 2017 • Yingliang Zhang, Peihong Yu, Wei Yang, Yuanxi Ma, Jingyi Yu

In this paper, we explore using light fields captured by plenoptic cameras or camera arrays as inputs.
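No code is available; one minimal building block for reasoning in ray space is the Plücker parameterization of a ray, shown below. This is a standard construction; the paper's actual ray-space features are more involved.

```python
import numpy as np

def ray_to_pluecker(origin, direction):
    """Plücker coordinates (d, m) of a ray: a ray-space parameterization
    that is independent of where along the ray the origin sits."""
    d = direction / np.linalg.norm(direction)
    m = np.cross(origin, d)  # moment vector encodes the ray's offset
    return np.concatenate([d, m])  # 6-vector feature per ray
```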
