Search Results for author: Zhengqi Li

Found 13 papers, 5 papers with code

DynIBaR: Neural Dynamic Image-Based Rendering

no code implementations • 20 Nov 2022 • Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely

Our system retains the ability of prior methods to model complex scenes and view-dependent effects, while also enabling the synthesis of photo-realistic novel views from long videos featuring complex scene dynamics and unconstrained camera trajectories.
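
Since the snippet only lists capabilities, here is a minimal, hypothetical sketch of the core image-based rendering idea the title refers to: colors reprojected from nearby source views are blended per ray with learned weights. The function and variable names are illustrative placeholders, not the authors' code.

    # Illustrative sketch of image-based rendering aggregation (not the paper's code):
    # color samples reprojected from source views are blended with softmax weights.
    import numpy as np

    def aggregate_views(sampled_colors: np.ndarray, scores: np.ndarray) -> np.ndarray:
        """Blend per-source-view color samples for one target ray.

        sampled_colors: (num_views, 3) RGB samples reprojected from source frames.
        scores:         (num_views,) unnormalized visibility/consistency scores.
        """
        weights = np.exp(scores - scores.max())   # softmax, numerically stable
        weights /= weights.sum()
        return weights @ sampled_colors           # (3,) blended color

    colors = np.array([[0.8, 0.2, 0.1], [0.7, 0.3, 0.1], [0.2, 0.2, 0.9]])
    scores = np.array([2.0, 1.5, -1.0])           # an occluded view gets a low score
    print(aggregate_views(colors, scores))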

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images

1 code implementation • 22 Jul 2022 • Zhengqi Li, Qianqian Wang, Noah Snavely, Angjoo Kanazawa

We present a method for learning to generate unbounded flythrough videos of natural scenes starting from a single view. This capability is learned from a collection of single photographs, without requiring camera poses or even multiple views of each scene.

Perpetual View Generation

3D Moments from Near-Duplicate Photos

no code implementations • CVPR 2022 • Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen

As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.

Motion Interpolation

IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images

no code implementations • CVPR 2022 • Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely

We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines.

Disentanglement
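
As a rough illustration of the SDF side of such a pipeline, the toy sphere-tracing routine below marches rays against an analytic signed distance function. IRON itself optimizes a neural SDF from photometric images; this sketch only shows how an SDF yields ray-surface intersections.

    # Toy sphere tracing against an analytic SDF (a unit sphere), not IRON's method.
    import numpy as np

    def sdf_sphere(p: np.ndarray, radius: float = 1.0) -> float:
        return float(np.linalg.norm(p) - radius)

    def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
        """March along a ray until the SDF reports a surface hit (or give up)."""
        t = 0.0
        for _ in range(max_steps):
            d = sdf(origin + t * direction)
            if d < eps:
                return t      # hit: distance along the ray
            t += d            # safe step: the SDF lower-bounds distance to the surface
        return None           # miss

    origin = np.array([0.0, 0.0, -3.0])
    direction = np.array([0.0, 0.0, 1.0])            # unit ray toward the sphere
    print(sphere_trace(origin, direction, sdf_sphere))  # ~2.0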

Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes

3 code implementations • CVPR 2021 • Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang

We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input.
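
A hedged sketch of the kind of space-time field such a method optimizes: an MLP mapping a 3D point plus a time value to color, density, and 3D scene flow toward adjacent frames. The class name SpaceTimeField and all layer sizes are illustrative assumptions, not the paper's architecture.

    # Minimal space-time radiance field sketch (illustrative, not the paper's model).
    import torch
    import torch.nn as nn

    class SpaceTimeField(nn.Module):
        def __init__(self, hidden: int = 128):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.rgb_sigma = nn.Linear(hidden, 4)   # RGB + volume density
            self.scene_flow = nn.Linear(hidden, 6)  # forward + backward 3D flow

        def forward(self, xyzt: torch.Tensor):
            h = self.trunk(xyzt)                    # xyzt: (N, 4) = (x, y, z, t)
            rgb_sigma = self.rgb_sigma(h)
            flow_fwd, flow_bwd = self.scene_flow(h).chunk(2, dim=-1)
            return rgb_sigma, flow_fwd, flow_bwd

    field = SpaceTimeField()
    out = field(torch.rand(8, 4))  # 8 sampled points along rays at some time t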

Crowdsampling the Plenoptic Function

1 code implementation • ECCV 2020 • Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely

Online photos of popular tourist landmarks represent a sparse and unstructured sampling of the plenoptic function for a particular scene.

Neural Rendering • Novel View Synthesis
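
For reference, the plenoptic function being sampled is conventionally written (Adelson and Bergen, 1991) as the 7D radiance observed at a 3D position, in a viewing direction, at a given wavelength and time:

    % radiance at position (x, y, z), direction (theta, phi), wavelength lambda, time t
    P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t)

Practical view synthesis typically works with lower-dimensional slices of this function, e.g. fixing time and integrating over wavelength.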

UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images

no code implementations • ICCV 2019 • Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely

We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene.
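
To make the 2DoF concrete: if u = (u_x, u_y, u_z) is the unit world-up direction expressed in a camera frame with x right, y up, z backward (an assumed convention, not necessarily the paper's), the two observable angles, pitch theta and roll phi, follow from u, while yaw remains unrecoverable from gravity alone:

    % pitch and roll from the world-up direction in camera coordinates
    \theta = \arcsin(u_z), \qquad \phi = \operatorname{atan2}(-u_x,\, u_y)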

CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

no code implementations • ECCV 2018 • Zhengqi Li, Noah Snavely

Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire.

Intrinsic Image Decomposition
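
For context, the task assumes the standard Lambertian image formation model, in which each image factors pointwise into reflectance (albedo) and shading:

    % intrinsic image model: image = reflectance x shading, per pixel p
    I(p) = R(p) \cdot S(p)

The decomposition is ill-posed (any scale can be moved between R and S), which is one reason ground truth is so hard to acquire.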

MegaDepth: Learning Single-View Depth Prediction from Internet Photos

1 code implementation • CVPR 2018 • Zhengqi Li, Noah Snavely

We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization, not only to novel scenes but also to other diverse datasets including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.

Depth Estimation • Depth Prediction +1
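
Because SfM depth recovered from Internet photos is only defined up to an unknown global scale, methods in this line build on a scale-invariant log-depth loss in the style of Eigen et al.; the sketch below is an illustrative implementation, not MegaDepth's exact training objective.

    # Scale-invariant log-depth loss sketch (illustrative, not the paper's exact loss).
    import numpy as np

    def scale_invariant_loss(pred_depth: np.ndarray, gt_depth: np.ndarray) -> float:
        """d_i = log(pred_i) - log(gt_i); the loss ignores any global scale on pred."""
        d = np.log(pred_depth) - np.log(gt_depth)
        return float(np.mean(d ** 2) - np.mean(d) ** 2)

    gt = np.array([1.0, 2.0, 4.0])
    print(scale_invariant_loss(gt, gt))          # 0.0: exact match
    print(scale_invariant_loss(3.0 * gt, gt))    # 0.0: global scaling is free
    print(scale_invariant_loss(gt + 1.0, gt))    # > 0: structural mismatch penalized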
