no code implementations • 14 Sep 2023 • Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski
We present an approach to modeling an image-space prior on scene motion.
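To make the idea concrete, here is a minimal sketch (not the paper's implementation) of how per-pixel motion trajectories sampled from such a prior could animate a still image; the nearest-neighbor splat and all names are simplifying assumptions:

```python
import numpy as np

def warp_with_trajectories(image, trajectories, t):
    """Forward-splat `image` using its per-pixel motion sampled at time t.

    image:        (H, W, 3) float array
    trajectories: (T, H, W, 2) per-pixel displacements over T time steps
    """
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip(np.round(xs + trajectories[t, ..., 0]).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + trajectories[t, ..., 1]).astype(int), 0, H - 1)
    out = np.zeros_like(image)
    out[yt, xt] = image[ys, xs]  # nearest-neighbor splat; real systems soft-splat
    return out
```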
1 code implementation • ICCV 2023 • Qianqian Wang, Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Aleksander Holynski, Noah Snavely
We present a new test-time optimization method for estimating dense and long-range motion from a video sequence.
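A hedged sketch of the general test-time-optimization pattern (not the paper's actual objective or representation): fit a per-pixel trajectory tensor to a video by penalizing disagreement with precomputed pairwise optical flow, plus a temporal smoothness term; `flow` here is a random stand-in:

```python
import torch

T, H, W = 30, 64, 64
traj = torch.zeros(T, H, W, 2, requires_grad=True)  # per-pixel displacement per frame
flow = torch.randn(T - 1, H, W, 2)                  # stand-in for off-the-shelf pairwise flow

opt = torch.optim.Adam([traj], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    # data term: consecutive trajectory increments should match pairwise flow
    data = ((traj[1:] - traj[:-1]) - flow).square().mean()
    # smoothness term: trajectories should vary smoothly over time
    smooth = (traj[2:] - 2 * traj[1:-1] + traj[:-2]).square().mean()
    loss = data + 0.1 * smooth
    loss.backward()
    opt.step()
```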
1 code implementation • CVPR 2023 • Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola, Noah Snavely
Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed spatial extent and with a limited range of camera motion.
Ranked #2 on Scene Generation on GoogleEarth
no code implementations • CVPR 2023 • Mohammed Suhail, Erika Lu, Zhengqi Li, Noah Snavely, Leonid Sigal, Forrester Cole
Our method applies recent progress in monocular camera pose and depth estimation to create a full RGBD video layer for the background, along with a video layer for each foreground object.
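Once such layers exist, recombining them is ordinary back-to-front "over" compositing; a minimal sketch follows (the decomposition itself, i.e. recovering pose, depth, and mattes, is the hard part and is not shown):

```python
import numpy as np

def composite_layers(background_rgb, foreground_layers):
    """Back-to-front 'over' compositing of video layers for one frame.

    background_rgb:    (H, W, 3)
    foreground_layers: list of (rgb (H, W, 3), alpha (H, W, 1)) tuples,
                       ordered back to front
    """
    out = background_rgb.copy()
    for rgb, alpha in foreground_layers:
        out = alpha * rgb + (1.0 - alpha) * out
    return out
```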
1 code implementation • CVPR 2023 • Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely
Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories.
1 code implementation • 22 Jul 2022 • Zhengqi Li, Qianqian Wang, Noah Snavely, Angjoo Kanazawa
We present a method for learning to generate unbounded flythrough videos of natural scenes starting from a single view. This capability is learned from a collection of single photographs, without requiring camera poses or even multiple views of each scene.
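The overall pattern in this line of work is a render-refine-repeat loop; the sketch below shows the control flow only, with stand-in components (assumptions, not the paper's networks):

```python
import numpy as np

def next_pose(pose):
    return pose + np.array([0.0, 0.0, 0.1])       # advance the virtual camera forward

def render(image, depth, pose):
    # Stand-in: a real renderer 3D-warps `image` into the new view using `depth`.
    return image, np.ones(image.shape[:2], dtype=bool)

def refine(warped, mask):
    # Stand-in: a real refiner inpaints disoccluded holes and re-predicts depth.
    return warped, np.ones(warped.shape[:2])

def fly_through(image, depth, pose, num_frames=10):
    frames = [image]
    for _ in range(num_frames):
        pose = next_pose(pose)
        warped, mask = render(image, depth, pose)
        image, depth = refine(warped, mask)
        frames.append(image)
    return frames
```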
1 code implementation • 25 May 2022 • Jiaming Sun, Xi Chen, Qianqian Wang, Zhengqi Li, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely
We are witnessing an explosion of neural implicit representations in computer vision and graphics.
no code implementations • CVPR 2022 • Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen
As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.
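One simplistic way to interpolate two photos in time is to scale a flow field and cross-fade the two warped frames, as in the toy sketch below; this is an assumption-laden simplification (a real system splats 3D point clouds to get true parallax):

```python
import numpy as np

def splat(image, flow, t):
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, H - 1)
    out = np.zeros_like(image)
    out[yt, xt] = image
    return out

def interpolate(img0, img1, flow_01, flow_10, t):
    f0 = splat(img0, flow_01, t)        # warp first photo forward by t
    f1 = splat(img1, flow_10, 1.0 - t)  # warp second photo back by 1 - t
    return (1.0 - t) * f0 + t * f1      # cross-fade the two warps
```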
no code implementations • CVPR 2022 • Vickie Ye, Zhengqi Li, Richard Tucker, Angjoo Kanazawa, Noah Snavely
We describe a method to extract persistent elements of a dynamic scene from an input video.
no code implementations • CVPR 2022 • Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines.
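A small sketch of the final step such a pipeline relies on: once geometry is represented as a signed distance function (here a placeholder analytic sphere standing in for a learned SDF), a triangle mesh can be extracted with marching cubes for use in standard graphics tools:

```python
import numpy as np
from skimage import measure

def sphere_sdf(pts, radius=0.5):
    return np.linalg.norm(pts, axis=-1) - radius  # stand-in for a learned SDF

n = 64
g = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
sdf = sphere_sdf(np.stack([x, y, z], axis=-1))

# Zero level set of the SDF -> triangle mesh (vertices, faces, vertex normals).
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
```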
3 code implementations • CVPR 2021 • Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang
We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input.
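A hedged sketch of the core representation common to dynamic-scene radiance fields: an MLP that maps a 3D point and a time value to color, density, and scene flow. The architecture below is an assumption for illustration, not the paper's network:

```python
import torch
import torch.nn as nn

class DynamicField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 1 + 6),  # rgb, density, fwd/bwd scene flow
        )

    def forward(self, xyz, t):
        out = self.mlp(torch.cat([xyz, t], dim=-1))
        rgb = torch.sigmoid(out[..., :3])     # color in [0, 1]
        density = torch.relu(out[..., 3:4])   # non-negative volume density
        flow = out[..., 4:]                   # 3D scene flow to t-1 and t+1
        return rgb, density, flow

field = DynamicField()
rgb, density, flow = field(torch.rand(1024, 3), torch.rand(1024, 1))
```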
1 code implementation • ECCV 2020 • Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely
Internet photos of popular landmarks represent a sparse and unstructured sampling of the plenoptic function for a particular scene.
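For reference, the plenoptic function records the radiance arriving at every viewpoint, in every direction, at every time:

```latex
% Radiance observed at position (x, y, z), in direction (\theta, \phi),
% at time t; the full 7D version adds wavelength \lambda.
L = P(x, y, z, \theta, \phi, t)
```

Each photo fixes one viewpoint and time and samples a bundle of directions.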
no code implementations • ICCV 2019 • Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely
We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene.
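Gravity constrains only two of the three rotational degrees of freedom (pitch and roll; yaw about the vertical is unobservable from gravity alone), which is why an estimated up vector suffices. A toy sketch under an assumed camera convention (x right, y down, z forward):

```python
import numpy as np

def pitch_roll_from_up(up):
    """`up` is the gravity-up direction expressed in camera coordinates."""
    up = up / np.linalg.norm(up)
    pitch = np.arcsin(np.clip(up[2], -1.0, 1.0))  # tilt of the optical axis
    roll = np.arctan2(up[0], -up[1])              # rotation about the optical axis
    return np.degrees(pitch), np.degrees(roll)

print(pitch_roll_from_up(np.array([0.0, -1.0, 0.0])))  # level camera -> (0.0, 0.0)
```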
no code implementations • CVPR 2019 • Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, William T. Freeman
We present a method for predicting dense depth in scenarios where both a monocular camera and people in the scene are freely moving.
no code implementations • ECCV 2018 • Zhengqi Li, Noah Snavely
Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire.
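The problem is usually posed as a per-pixel factorization of the image into reflectance (albedo) and shading:

```latex
% Intrinsic image model: observed image I factors into
% reflectance R and shading S at each pixel p.
I(p) = R(p) \cdot S(p)
```

The factorization is ill-posed from a single image, which is part of what makes the problem long-standing.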
1 code implementation • CVPR 2018 • Zhengqi Li, Noah Snavely
It is difficult, however, to collect ground truth training data at scale for intrinsic images.
3 code implementations • CVPR 2018 • Zhengqi Li, Noah Snavely
We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization: not only to novel scenes, but also to other diverse datasets including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.