Search Results for author: Richard Tucker

Found 21 papers, 12 papers with code

Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion

no code implementations 18 Jul 2024 Boyang Deng, Richard Tucker, Zhengqi Li, Leonidas Guibas, Noah Snavely, Gordon Wetzstein

To achieve this goal, we build on recent work on video diffusion, used within an autoregressive framework that can easily scale to long sequences.

Imputation, Video Generation
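Below is a minimal structural sketch of the autoregressive use of a video diffusion model described in the Streetscapes entry above: each new chunk of frames is generated conditioned on the last few frames already produced, so the sequence can be extended to arbitrary length. The names `sample_chunk` and `generate_long_video`, the chunk/overlap sizes, and the stubbed "model" are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sample_chunk(model, cond_frames, chunk_len, rng):
    """Placeholder for one conditional video-diffusion sampling call.

    A real model would run iterative denoising conditioned on `cond_frames`;
    here we just return random frames of the right shape.
    """
    h, w, c = cond_frames.shape[1:]
    return rng.standard_normal((chunk_len, h, w, c))

def generate_long_video(model, first_frames, total_len, chunk_len=16, overlap=4, rng=None):
    """Autoregressive long-video generation: each chunk is conditioned on the
    last `overlap` frames already generated, keeping the sequence locally
    consistent while scaling to long trajectories."""
    rng = rng or np.random.default_rng(0)
    frames = list(first_frames)
    while len(frames) < total_len:
        cond = np.stack(frames[-overlap:])          # conditioning context
        chunk = sample_chunk(model, cond, chunk_len, rng)
        frames.extend(chunk)                        # append newly generated frames
    return np.stack(frames[:total_len])

video = generate_long_video(model=None,
                            first_frames=np.zeros((4, 32, 32, 3)),
                            total_len=64)
print(video.shape)  # (64, 32, 32, 3)
```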

Generative Image Dynamics

no code implementations CVPR 2024 Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski

We present an approach to modeling an image-space prior on scene motion.

Persistent Nature: A Generative Model of Unbounded 3D Worlds

1 code implementation CVPR 2023 Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola, Noah Snavely

Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed extent with limited camera motions.

Ranked #3 on Scene Generation on GoogleEarth (KID metric)

Decoder, Scene Generation

DynIBaR: Neural Dynamic Image-Based Rendering

1 code implementation CVPR 2023 Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely

Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories.

Dimensions of Motion: Monocular Prediction through Flow Subspaces

no code implementations 2 Dec 2021 Richard Strong Bowen, Richard Tucker, Ramin Zabih, Noah Snavely

We introduce a way to learn to estimate a scene representation from a single image by predicting a low-dimensional subspace of optical flow for each training example, which encompasses the variety of possible camera and object movement.

Depth Estimation, Depth Prediction, +3
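A rough sketch of the flow-subspace idea in the "Dimensions of Motion" entry above, assuming a network that outputs K basis flow fields per image; the training signal measures how far an observed flow lies from the span of those bases. The least-squares projection and all names here are illustrative, not the paper's exact formulation.

```python
import numpy as np

def project_onto_subspace(basis, flow):
    """Least-squares projection of an observed flow field onto the span of
    predicted basis flow fields.

    basis: (K, H, W, 2) predicted basis flows; flow: (H, W, 2) observed flow.
    """
    K = basis.shape[0]
    B = basis.reshape(K, -1).T          # (H*W*2, K) design matrix
    f = flow.reshape(-1)                # flattened observed flow
    coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)
    return (B @ coeffs).reshape(flow.shape), coeffs

def subspace_loss(basis, flow):
    """Distance between the observed flow and its projection; minimizing this
    encourages the predicted low-dimensional subspace to contain the true flow."""
    proj, _ = project_onto_subspace(basis, flow)
    return float(np.mean((proj - flow) ** 2))

rng = np.random.default_rng(0)
basis = rng.standard_normal((4, 8, 8, 2))   # K=4 basis flow fields
flow = rng.standard_normal((8, 8, 2))       # observed optical flow
print(subspace_loss(basis, flow))
```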

SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting

no code implementations ICCV 2021 Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Michael Krainin, Dominik Kaeser, William T. Freeman, David Salesin, Brian Curless, Ce Liu

We present SLIDE, a modular and unified system for single image 3D photography that uses a simple yet effective soft layering strategy to better preserve appearance details in novel views.

Image Matting
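A minimal illustration of the soft-layering idea mentioned in the SLIDE entry above: a foreground layer with a soft (non-binary) alpha matte is composited over an inpainted background layer when rendering a novel view. The alpha-from-disparity heuristic below is an assumption for illustration only, not SLIDE's actual layering or inpainting.

```python
import numpy as np

def soft_composite(fg_rgb, fg_alpha, bg_rgb):
    """Standard 'over' compositing of a soft foreground layer on top of an
    inpainted background layer; soft alphas preserve thin structures and hair."""
    a = fg_alpha[..., None]
    return a * fg_rgb + (1.0 - a) * bg_rgb

# Toy example: a soft alpha derived from disparity with a sigmoid (illustrative only).
rng = np.random.default_rng(0)
disparity = rng.uniform(0.0, 1.0, size=(8, 8))
fg_alpha = 1.0 / (1.0 + np.exp(-(disparity - 0.5) / 0.05))  # soft, not binary
fg_rgb = rng.uniform(size=(8, 8, 3))
bg_rgb = rng.uniform(size=(8, 8, 3))       # would come from depth-aware inpainting
out = soft_composite(fg_rgb, fg_alpha, bg_rgb)
print(out.shape)  # (8, 8, 3)
```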

Consistent Depth of Moving Objects in Video

no code implementations 2 Aug 2021 Zhoutong Zhang, Forrester Cole, Richard Tucker, William T. Freeman, Tali Dekel

We present a method to estimate depth of a dynamic scene, containing arbitrary moving objects, from an ordinary video captured with a moving camera.

Depth Estimation, Depth Prediction, +2

De-rendering the World's Revolutionary Artefacts

1 code implementation CVPR 2021 Shangzhe Wu, Ameesh Makadia, Jiajun Wu, Noah Snavely, Richard Tucker, Angjoo Kanazawa

Recent works have shown exciting results in unsupervised image de-rendering -- learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision.

Repopulating Street Scenes

no code implementations CVPR 2021 Yifan Wang, Andrew Liu, Richard Tucker, Jiajun Wu, Brian L. Curless, Steven M. Seitz, Noah Snavely

We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.

Autonomous Driving

Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image

1 code implementation ICCV 2021 Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa

We introduce the problem of perpetual view generation - long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image.

Image Generation, Perpetual View Generation, +1
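A structural sketch of a render-refine-repeat style loop for the perpetual view generation task in the Infinite Nature entry above: geometrically warp the current frame toward the next camera pose, then refine it to fill disocclusions, and repeat indefinitely. The warp and refinement functions below are stubs, not the paper's networks.

```python
import numpy as np

def render_forward(rgb, depth, pose_delta):
    """Placeholder: warp the current RGB-D frame into the next camera pose.
    A real implementation would perform a differentiable 3D reprojection."""
    return np.roll(rgb, shift=2, axis=1), np.roll(depth, shift=2, axis=1)

def refine(rgb, depth):
    """Placeholder for the learned refinement step that fills disocclusions
    and adds detail after each warp."""
    return np.clip(rgb, 0.0, 1.0), np.clip(depth, 1e-3, None)

def perpetual_view_generation(rgb, depth, trajectory):
    """Render-refine-repeat: generate an arbitrarily long fly-through from a
    single starting RGB-D frame by alternating warping and refinement."""
    frames = [rgb]
    for pose_delta in trajectory:
        rgb, depth = render_forward(rgb, depth, pose_delta)
        rgb, depth = refine(rgb, depth)
        frames.append(rgb)
    return frames

rng = np.random.default_rng(0)
frames = perpetual_view_generation(rng.uniform(size=(16, 16, 3)),
                                   rng.uniform(0.5, 2.0, size=(16, 16)),
                                   trajectory=[None] * 10)
print(len(frames))  # 11 frames
```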

MetaSDF: Meta-learning Signed Distance Functions

2 code implementations NeurIPS 2020 Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution.

Decoder, Meta-Learning
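A compact sketch of the meta-learned signed distance function idea in the MetaSDF entry above: a small network maps points to signed distances, and a MAML-style inner loop specializes a shared initialization to a new shape from a few (point, distance) samples. The tiny NumPy MLP, step counts, and learning rate are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sdf_forward(params, pts):
    """Tiny one-hidden-layer MLP mapping 2D points to signed distances."""
    W1, b1, W2, b2 = params
    h = np.tanh(pts @ W1 + b1)
    return h @ W2 + b2

def inner_loop_adapt(params, pts, sdf_targets, steps=5, lr=1e-2):
    """MAML-style inner loop: specialize a meta-learned SDF initialization to a
    new shape using a handful of supervised samples. Gradients are written out
    by hand for this tiny network."""
    W1, b1, W2, b2 = [p.copy() for p in params]
    for _ in range(steps):
        h = np.tanh(pts @ W1 + b1)
        err = (h @ W2 + b2) - sdf_targets          # (N, 1) residuals
        dW2 = h.T @ err / len(pts)                 # backprop through layer 2
        db2 = err.mean(0)
        dh = err @ W2.T * (1 - h ** 2)             # backprop through tanh
        dW1 = pts.T @ dh / len(pts)
        db1 = dh.mean(0)
        W1, b1, W2, b2 = W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
    return W1, b1, W2, b2

rng = np.random.default_rng(0)
params = (rng.standard_normal((2, 32)) * 0.5, np.zeros(32),
          rng.standard_normal((32, 1)) * 0.5, np.zeros(1))
pts = rng.uniform(-1, 1, size=(256, 2))
targets = np.linalg.norm(pts, axis=1, keepdims=True) - 0.5   # SDF of a circle
adapted = inner_loop_adapt(params, pts, targets)
print(np.abs(sdf_forward(adapted, pts) - targets).mean())
```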

Single-View View Synthesis with Multiplane Images

1 code implementation CVPR 2020 Richard Tucker, Noah Snavely

A recent strand of work in view synthesis uses deep learning to generate multiplane images (a camera-centric, layered 3D representation) given two or more input images at known viewpoints.
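A minimal sketch of how a multiplane image is turned into a picture: the camera-centric stack of fronto-parallel RGBA planes is alpha-composited back to front with the standard "over" operator. This is a generic MPI rendering routine, not the paper's code; warping the planes into a novel viewpoint before compositing is omitted.

```python
import numpy as np

def composite_mpi(rgba_planes):
    """Back-to-front 'over' compositing of a multiplane image.

    rgba_planes: (D, H, W, 4) with planes ordered back (index 0) to front.
    Returns an (H, W, 3) image.
    """
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    for plane in rgba_planes:                    # back to front
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out  # standard over operator
    return out

rng = np.random.default_rng(0)
mpi = rng.uniform(size=(32, 16, 16, 4))          # 32 planes, 16x16 pixels
image = composite_mpi(mpi)
print(image.shape)  # (16, 16, 3)
```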

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation CVPR 2020 Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Lighting Estimation
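The core output described in the Lighthouse entry above is a volumetric representation of lighting that can be queried at any 3D location. The nearest-voxel lookup below is only a toy stand-in for that idea; the paper's multiscale RGBA volumes and visibility handling are not reproduced here.

```python
import numpy as np

def query_lighting_volume(volume, origin, voxel_size, point):
    """Look up estimated incident illumination (here just an RGB value) at an
    arbitrary 3D point by indexing into a predicted lighting volume.
    Nearest-voxel lookup; a real system would interpolate and handle visibility."""
    idx = np.floor((np.asarray(point) - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(volume.shape[:3]) - 1)
    return volume[tuple(idx)]

rng = np.random.default_rng(0)
volume = rng.uniform(size=(16, 16, 16, 3))      # predicted RGB lighting volume
print(query_lighting_volume(volume, origin=np.zeros(3), voxel_size=0.25,
                            point=[1.3, 0.7, 2.0]))
```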

Pushing the Boundaries of View Extrapolation with Multiplane Images

1 code implementation CVPR 2019 Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases linearly with the MPI disparity sampling frequency, as well as a novel MPI prediction procedure that theoretically enables view extrapolations of up to $4\times$ the lateral viewpoint movement allowed by prior work.
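The analysis above ties the renderable view range to how finely the MPI samples disparity. The snippet below only shows the standard practice of placing planes uniformly in inverse depth, so doubling the plane count halves the disparity spacing; the paper's exact view-extrapolation bound is not reproduced here.

```python
import numpy as np

def mpi_plane_depths(near, far, num_planes):
    """Place MPI planes uniformly in disparity (inverse depth), the sampling
    density that the paper's analysis relates linearly to the range of views
    that can be rendered without artifacts."""
    disparities = np.linspace(1.0 / far, 1.0 / near, num_planes)
    return 1.0 / disparities

for n in (16, 32, 64):
    depths = mpi_plane_depths(near=1.0, far=100.0, num_planes=n)
    spacing = np.diff(1.0 / depths).max()
    print(f"{n} planes -> max disparity spacing {spacing:.4f}")
```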

Layer-structured 3D Scene Inference via View Synthesis

1 code implementation ECCV 2018 Shubham Tulsiani, Richard Tucker, Noah Snavely

We present an approach to infer a layer-structured 3D representation of a scene from a single input image.

Stereo Magnification: Learning View Synthesis using Multiplane Images

1 code implementation 24 May 2018 Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, Noah Snavely

The view synthesis problem--generating novel views of a scene from known imagery--has garnered recent attention due in part to compelling applications in virtual and augmented reality.

Novel View Synthesis
