Search Results for author: Richard Tucker

Found 14 papers, 7 papers with code

SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting

no code implementations • ICCV 2021 • Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Michael Krainin, Dominik Kaeser, William T. Freeman, David Salesin, Brian Curless, Ce Liu

We present SLIDE, a modular and unified system for single image 3D photography that uses a simple yet effective soft layering strategy to better preserve appearance details in novel views.

Consistent Depth of Moving Objects in Video

no code implementations • 2 Aug 2021 • Zhoutong Zhang, Forrester Cole, Richard Tucker, William T. Freeman, Tali Dekel

We present a method to estimate depth of a dynamic scene, containing arbitrary moving objects, from an ordinary video captured with a moving camera.

Depth Estimation • Video Editing

KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control

no code implementations • CVPR 2021 • Tomas Jakab, Richard Tucker, Ameesh Makadia, Jiajun Wu, Noah Snavely, Angjoo Kanazawa

We cast this shape control task as the problem of aligning a source 3D object to a target 3D object from the same object category.

De-rendering the World's Revolutionary Artefacts

1 code implementation • CVPR 2021 • Shangzhe Wu, Ameesh Makadia, Jiajun Wu, Noah Snavely, Richard Tucker, Angjoo Kanazawa

Recent works have shown exciting results in unsupervised image de-rendering -- learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision.

Repopulating Street Scenes

no code implementations • CVPR 2021 • Yifan Wang, Andrew Liu, Richard Tucker, Jiajun Wu, Brian L. Curless, Steven M. Seitz, Noah Snavely

We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.

Autonomous Driving

Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image

1 code implementation • ICCV 2021 • Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa

We introduce the problem of perpetual view generation: long-range generation of novel views corresponding to an arbitrarily long camera trajectory, given a single image.

Image Generation • Video Generation
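
The method iterates what the paper calls a render-refine-repeat loop: warp the current frame into the next camera pose, then let a network inpaint disocclusions and restore detail. Below is a minimal sketch of that loop; `render_forward` and `refine_net` are hypothetical placeholder names, not the released API.

```python
def perpetual_view_generation(image, disparity, trajectory,
                              render_forward, refine_net):
    """Hypothetical render-refine-repeat loop.

    image, disparity: current RGB frame and its disparity map.
    trajectory: iterable of relative camera poses to fly along.
    render_forward: warps (image, disparity) into the next pose,
        returning the warped frame, warped disparity, and a mask
        of disoccluded (hole) pixels.
    refine_net: inpaints the holes and sharpens the result,
        predicting RGB and disparity for the new frame.
    """
    frames = []
    for pose in trajectory:
        rgb, disp, holes = render_forward(image, disparity, pose)
        image, disparity = refine_net(rgb, disp, holes)  # output feeds the next step
        frames.append(image)
    return frames
```

Because each refined frame is fed back in as the next input, the loop can run for arbitrarily long trajectories.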

MetaSDF: Meta-learning Signed Distance Functions

1 code implementation • NeurIPS 2020 • Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution.

Meta-Learning
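
MetaSDF applies gradient-based meta-learning so that a network initialization can be specialized to a new shape in only a few gradient steps. Here is a rough sketch of a MAML-style inner loop under that framing; `sdf_net` and `grad_fn` are hypothetical placeholders, not the paper's code.

```python
def adapt_to_shape(theta, points, gt_sdf, sdf_net, grad_fn,
                   steps=5, lr=1e-2):
    """Hypothetical MAML-style inner loop: specialize meta-learned
    initial weights `theta` to one shape from sampled (point, SDF)
    supervision.

    sdf_net(theta, points): predicted signed distances at `points`.
    grad_fn(loss, theta):   gradient of loss(theta) w.r.t. theta.
    """
    for _ in range(steps):
        loss = lambda w: ((sdf_net(w, points) - gt_sdf) ** 2).mean()
        grads = grad_fn(loss, theta)
        theta = [w - lr * g for w, g in zip(theta, grads)]  # gradient step
    return theta  # weights now encode this specific shape
```

An outer meta-training loop would then optimize the initial `theta` so that these few inner steps suffice across a dataset of shapes.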

Single-View View Synthesis with Multiplane Images

no code implementations • CVPR 2020 • Richard Tucker, Noah Snavely

A recent strand of work in view synthesis uses deep learning to generate multiplane images (a camera-centric, layered 3D representation) given two or more input images at known viewpoints.
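
An MPI is a stack of fronto-parallel RGBA planes at fixed depths in a reference camera's frustum; a view is rendered by warping each plane into the target camera and over-compositing from back to front. A minimal NumPy sketch of the compositing step (the per-plane warp is omitted, and the names are mine, not the paper's):

```python
import numpy as np

def composite_mpi(colors, alphas):
    """Over-composite MPI planes from back to front.

    colors: (D, H, W, 3) RGB for D planes, index 0 = farthest.
    alphas: (D, H, W, 1) per-plane opacity in [0, 1].
    Returns the rendered (H, W, 3) image.
    """
    out = np.zeros(colors.shape[1:], dtype=np.float32)
    for rgb, a in zip(colors, alphas):   # back to front
        out = rgb * a + out * (1.0 - a)  # the "over" operator
    return out
```

For a novel view, each plane would first be warped by the homography its depth induces between the reference and target cameras, then composited as above.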

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation • CVPR 2020 • Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Pushing the Boundaries of View Extrapolation with Multiplane Images

1 code implementation • CVPR 2019 • Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases linearly with the MPI disparity sampling frequency, as well as a novel MPI prediction procedure that theoretically enables view extrapolations of up to $4\times$ the lateral viewpoint movement allowed by prior work.
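
The linear relationship can be paraphrased with a standard sub-pixel-parallax argument (notation mine, not the paper's): a lateral camera translation $t$ shifts a plane at disparity $d$ by roughly $t \cdot d$ pixels, so adjacent planes spaced $\Delta d$ apart in disparity separate by $t \cdot \Delta d$, and keeping that separation under one pixel bounds the renderable range.

```latex
% D planes uniformly spaced in disparity over [d_min, d_max]:
\Delta d = \frac{d_{\max} - d_{\min}}{D - 1}
% Sub-pixel separation between adjacent warped planes requires
|t| \, \Delta d \le 1
\quad\Longrightarrow\quad
|t|_{\max} = \frac{1}{\Delta d}
```

So the renderable range grows linearly in the disparity sampling frequency $1/\Delta d$, which is the quantity the abstract refers to.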

Learning the Depths of Moving People by Watching Frozen People

no code implementations • CVPR 2019 • Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, William T. Freeman

We present a method for predicting dense depth in scenarios where both a monocular camera and people in the scene are freely moving.

Depth Estimation

Layer-structured 3D Scene Inference via View Synthesis

1 code implementation • ECCV 2018 • Shubham Tulsiani, Richard Tucker, Noah Snavely

We present an approach to infer a layer-structured 3D representation of a scene from a single input image.

Stereo Magnification: Learning View Synthesis using Multiplane Images

1 code implementation • 24 May 2018 • Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, Noah Snavely

The view synthesis problem--generating novel views of a scene from known imagery--has garnered recent attention due in part to compelling applications in virtual and augmented reality.

Novel View Synthesis
