Search Results for author: Zhengqi Li

Found 17 papers, 9 papers with code

Generative Image Dynamics

no code implementations · 14 Sep 2023 · Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski

We present an approach to modeling an image-space prior on scene motion.

Persistent Nature: A Generative Model of Unbounded 3D Worlds

1 code implementation · CVPR 2023 · Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola, Noah Snavely

Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed extent with limited camera motions.

Scene Generation

Omnimatte3D: Associating Objects and Their Effects in Unconstrained Monocular Video

no code implementations · CVPR 2023 · Mohammed Suhail, Erika Lu, Zhengqi Li, Noah Snavely, Leonid Sigal, Forrester Cole

Our method instead applies recent progress in monocular camera pose and depth estimation to create a full RGBD video layer for the background, along with a video layer for each foreground object.

Depth Estimation

DynIBaR: Neural Dynamic Image-Based Rendering

1 code implementation · CVPR 2023 · Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely

Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories.

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images

1 code implementation · 22 Jul 2022 · Zhengqi Li, Qianqian Wang, Noah Snavely, Angjoo Kanazawa

We present a method for learning to generate unbounded flythrough videos of natural scenes starting from a single view. This capability is learned from a collection of single photographs, without requiring camera poses or even multiple views of each scene.

Perpetual View Generation

Neural 3D Reconstruction in the Wild

1 code implementation · 25 May 2022 · Jiaming Sun, Xi Chen, Qianqian Wang, Zhengqi Li, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely

We are witnessing an explosion of neural implicit representations in computer vision and graphics.

3D Reconstruction · Surface Reconstruction

3D Moments from Near-Duplicate Photos

no code implementations · CVPR 2022 · Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen

As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.

Motion Interpolation

IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images

no code implementations · CVPR 2022 · Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely

We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines.

Disentanglement · Inverse Rendering

Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes

3 code implementations · CVPR 2021 · Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang

We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input.

Crowdsampling the Plenoptic Function

1 code implementation · ECCV 2020 · Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely

Crowdsampled Internet photos represent a sparse and unstructured sampling of the plenoptic function for a particular scene.

Neural Rendering · Novel View Synthesis
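For intuition, the plenoptic function assigns a radiance value to every viewpoint position and viewing direction, and a photo collection corresponds to scattered samples of it. The sketch below is purely illustrative (the closed-form "radiance" and all names are invented for the example, not the paper's model):

```python
import numpy as np

# Toy stand-in for a scene's plenoptic function: radiance as a function of
# viewpoint position (x, y, z) and viewing direction (theta, phi).
# The smooth closed form here is illustrative only.
def plenoptic(position, theta, phi):
    x, y, z = position
    return np.sin(x + theta) * np.cos(y + phi) + 0.1 * z

# "Crowdsampled" photos correspond to sparse, unstructured samples of this
# function: random viewpoints and directions rather than a regular grid.
rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(100, 3))
thetas = rng.uniform(0.0, np.pi, size=100)
phis = rng.uniform(0.0, 2.0 * np.pi, size=100)

samples = np.array(
    [plenoptic(p, t, ph) for p, t, ph in zip(positions, thetas, phis)]
)
print(samples.shape)  # (100,)
```

The view-synthesis task is then to reconstruct the continuous function from such scattered samples well enough to render unseen viewpoints.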

UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images

no code implementations · ICCV 2019 · Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely

We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene.

Camera Calibration
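The 2DoF in question are pitch and roll, which are fully determined by the world "up" direction expressed in camera coordinates (yaw about the gravity axis drops out). A minimal sketch of that geometric relation, under an assumed y-up, z-backward camera convention (function names and conventions are illustrative, not UprightNet's):

```python
import numpy as np

def up_vector_in_camera(pitch, roll):
    """World 'up' expressed in camera coordinates for a camera rotated by
    the given pitch (about x) and roll (about z); yaw about world up has
    no effect on this vector."""
    return np.array([
        np.sin(roll) * np.cos(pitch),
        np.cos(roll) * np.cos(pitch),
        -np.sin(pitch),
    ])

def orientation_from_up(u):
    """Recover (pitch, roll) from a unit up vector; valid for |pitch| < 90 deg."""
    pitch = np.arcsin(np.clip(-u[2], -1.0, 1.0))
    roll = np.arctan2(u[0], u[1])
    return pitch, roll

# Round trip: a 20 deg pitch, -5 deg roll camera is recovered exactly.
pitch, roll = np.radians(20.0), np.radians(-5.0)
u = up_vector_in_camera(pitch, roll)
p2, r2 = orientation_from_up(u)
print(np.degrees(p2), np.degrees(r2))
```

A learned estimator such as UprightNet predicts this up direction (or equivalent geometry) from image cues; the closed-form recovery above is the easy part.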

CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

no code implementations · ECCV 2018 · Zhengqi Li, Noah Snavely

Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire.

Intrinsic Image Decomposition

MegaDepth: Learning Single-View Depth Prediction from Internet Photos

3 code implementations · CVPR 2018 · Zhengqi Li, Noah Snavely

We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization: not only to novel scenes, but also to other diverse datasets including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.

Depth Estimation · Depth Prediction · +1
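Depth derived from Internet-photo SfM reconstructions is only known up to an unknown global scale, so training in this regime typically uses a scale-invariant log-depth loss in the style of Eigen et al. A minimal sketch (the function name and setup are illustrative, not MegaDepth's exact objective, which adds further terms):

```python
import numpy as np

def scale_invariant_log_loss(pred_depth, gt_depth, lam=1.0):
    """Scale-invariant log-depth loss: with lam=1 it is fully invariant
    to a global scale factor on the predicted depth, which suits
    SfM supervision whose absolute scale is unknown."""
    d = np.log(pred_depth) - np.log(gt_depth)
    n = d.size
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2

rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=1000)

# A prediction that is correct up to a global scale incurs ~zero loss...
loss_scaled = scale_invariant_log_loss(gt * 3.0, gt)
# ...while a genuinely wrong prediction does not.
loss_wrong = scale_invariant_log_loss(gt * rng.uniform(0.5, 2.0, size=1000), gt)
print(loss_scaled, loss_wrong)
```

With `lam=1` the second term exactly cancels any constant offset in log-depth, which is what makes the global scale factor free.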
