Search Results for author: Noah Snavely

Found 77 papers, 42 papers with code

Graph-Based Discriminative Learning for Location Recognition

no code implementations CVPR 2013 Song Cao, Noah Snavely

For a query image, each database image is ranked according to these local distance functions in order to place the image in the right part of the graph.

Photometric Ambient Occlusion

1 code implementation CVPR 2013 Daniel Hauagge, Scott Wehrwein, Kavita Bala, Noah Snavely

We present a method for computing ambient occlusion (AO) for a stack of images of a scene from a fixed viewpoint.

Material Recognition in the Wild with the Materials in Context Database

no code implementations CVPR 2015 Sean Bell, Paul Upchurch, Noah Snavely, Kavita Bala

In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild.

Material Recognition, Segmentation

DeepStereo: Learning to Predict New Views from the World's Imagery

1 code implementation CVPR 2016 John Flynn, Ivan Neulander, James Philbin, Noah Snavely

To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.

BubbLeNet: Foveated Imaging for Visual Discovery

no code implementations ICCV 2015 Kevin Matzen, Noah Snavely

We propose a new method for turning an Internet-scale corpus of categorized images into a small set of human-interpretable discriminative visual elements using powerful tools based on deep learning.

From A to Z: Supervised Transfer of Style and Content Using Deep Neural Network Generators

no code implementations 7 Mar 2016 Paul Upchurch, Noah Snavely, Kavita Bala

We propose a new neural network architecture for solving single-image analogies - the generation of an entire set of stylistically similar images from just a single input image.

Deep Feature Interpolation for Image Content Changes

2 code implementations CVPR 2017 Paul Upchurch, Jacob Gardner, Geoff Pleiss, Robert Pless, Noah Snavely, Kavita Bala, Kilian Weinberger

We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation.
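
A rough sketch of the interpolation idea named above (not the authors' exact pipeline): shift an image's deep features along the mean difference between features of images with and without a target attribute, then reconstruct an image from the shifted features (reconstruction omitted; all feature values below are placeholders).

import numpy as np

# Hypothetical pre-extracted deep features (e.g. from a CNN); values are placeholders.
rng = np.random.default_rng(0)
feats_with = rng.normal(size=(50, 4096))      # images that have the target attribute
feats_without = rng.normal(size=(50, 4096))   # images that lack it
query = rng.normal(size=(4096,))              # features of the image to edit

w = feats_with.mean(axis=0) - feats_without.mean(axis=0)   # attribute direction in feature space
alpha = 0.8                                                 # interpolation strength
edited_features = query + alpha * w                         # DFI then inverts these back to pixels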

Unsupervised Learning of Depth and Ego-Motion from Video

2 code implementations CVPR 2017 Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe

We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences.

Depth And Camera Motion, Motion Estimation +1
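
The supervision signal in this framework is view synthesis: target pixels are reprojected into a source frame using the predicted depth and relative pose, and the photometric difference is minimized. A minimal sketch of that reprojection with dummy intrinsics, depth, and pose (not the authors' code):

import numpy as np

def reproject(depth, K, T_target_to_source):
    # Map every target pixel into the source image using depth and relative pose.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                    # back-project to 3D
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])                   # 4 x N homogeneous points
    src = K @ (T_target_to_source @ cam_h)[:3]                             # transform and project
    uv = src[:2] / src[2:3]                                                # source-image coordinates
    return uv.T.reshape(h, w, 2)

K = np.array([[100., 0., 16.], [0., 100., 12.], [0., 0., 1.]])             # dummy intrinsics
depth = np.full((24, 32), 5.0)                                             # dummy predicted depth
T = np.eye(4); T[0, 3] = 0.1                                               # dummy predicted pose (small translation)
coords = reproject(depth, K, T)   # sampling the source image at coords gives the reconstructed target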

Shading Annotations in the Wild

no code implementations CVPR 2017 Balazs Kovacs, Sean Bell, Noah Snavely, Kavita Bala

We demonstrate the value of our data and network in an application to intrinsic images, where we can reduce decomposition artifacts produced by existing algorithms.

Image Relighting, Intrinsic Image Decomposition +2

StreetStyle: Exploring world-wide clothing styles from millions of photos

2 code implementations 6 Jun 2017 Kevin Matzen, Kavita Bala, Noah Snavely

Each day billions of photographs are uploaded to photo-sharing services and social media platforms.

Attribute

MegaDepth: Learning Single-View Depth Prediction from Internet Photos

3 code implementations CVPR 2018 Zhengqi Li, Noah Snavely

We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization, not only to novel scenes, but also to other diverse datasets including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.

Depth Estimation, Depth Prediction +1

Stereo Magnification: Learning View Synthesis using Multiplane Images

1 code implementation 24 May 2018 Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, Noah Snavely

The view synthesis problem--generating novel views of a scene from known imagery--has garnered recent attention due in part to compelling applications in virtual and augmented reality.

Novel View Synthesis

Discovery of Latent 3D Keypoints via End-to-end Geometric Reasoning

1 code implementation NeurIPS 2018 Supasorn Suwajanakorn, Noah Snavely, Jonathan Tompson, Mohammad Norouzi

We demonstrate this framework on 3D pose estimation by proposing a differentiable objective that seeks the optimal set of keypoints for recovering the relative pose between two views of an object.

3D Pose Estimation
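
A standard differentiable way to recover relative pose from two sets of corresponding 3D keypoints is orthogonal Procrustes (Kabsch) alignment; the sketch below shows that generic procedure, not necessarily the paper's exact objective.

import numpy as np

def kabsch(P, Q):
    # Rotation R and translation t minimizing sum ||R @ P_i + t - Q_i||^2 for N x 3 point sets.
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# Keypoints related by a known rigid motion are recovered exactly.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = kabsch(P, Q)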

Layer-structured 3D Scene Inference via View Synthesis

1 code implementation ECCV 2018 Shubham Tulsiani, Richard Tucker, Noah Snavely

We present an approach to infer a layer-structured 3D representation of a scene from a single input image.

CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

no code implementations ECCV 2018 Zhengqi Li, Noah Snavely

Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire.

Intrinsic Image Decomposition

Neural Rerendering in the Wild

no code implementations CVPR 2019 Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, Ricardo Martin-Brualla

Starting from internet photos of a tourist landmark, we apply traditional 3D reconstruction to register the photos and approximate the scene as a point cloud.

3D Reconstruction

Pushing the Boundaries of View Extrapolation with Multiplane Images

1 code implementation CVPR 2019 Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases linearly with the MPI disparity sampling frequency, as well as a novel MPI prediction procedure that theoretically enables view extrapolations of up to $4\times$ the lateral viewpoint movement allowed by prior work.

UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images

no code implementations ICCV 2019 Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely

We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene.

Camera Calibration

GeoStyle: Discovering Fashion Trends and Events

1 code implementation ICCV 2019 Utkarsh Mall, Kevin Matzen, Bharath Hariharan, Noah Snavely, Kavita Bala

Understanding fashion styles and trends is of great potential interest to retailers and consumers alike.

Leveraging Vision Reconstruction Pipelines for Satellite Imagery

no code implementations 7 Oct 2019 Kai Zhang, Jin Sun, Noah Snavely

Reconstructing 3D geometry from satellite imagery is an important topic of research.

3D Reconstruction

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation CVPR 2020 Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Lighting Estimation

Depth Sensing Beyond LiDAR Range

no code implementations CVPR 2020 Kai Zhang, Jiaxin Xie, Noah Snavely, Qifeng Chen

Depth sensing is a critical component of autonomous driving technologies, but today's LiDAR- or stereo camera-based solutions have limited range.

Autonomous Driving

Single-View View Synthesis with Multiplane Images

1 code implementation CVPR 2020 Richard Tucker, Noah Snavely

A recent strand of work in view synthesis uses deep learning to generate multiplane images (a camera-centric, layered 3D representation) given two or more input images at known viewpoints.
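
A multiplane image is a stack of fronto-parallel RGBA layers at fixed depths; a view is rendered by warping each layer into the target camera and compositing back to front with the "over" operation. A minimal sketch of the compositing step (dummy layers, warping omitted):

import numpy as np

def composite_mpi(colors, alphas):
    # Back-to-front "over" compositing; colors: (D, H, W, 3), alphas: (D, H, W, 1), back layer first.
    out = np.zeros(colors.shape[1:])
    for rgb, a in zip(colors, alphas):
        out = rgb * a + out * (1.0 - a)
    return out

D, H, W = 4, 8, 8
colors = np.random.rand(D, H, W, 3)     # placeholder layer colors
alphas = np.random.rand(D, H, W, 1)     # placeholder layer opacities
image = composite_mpi(colors, alphas)   # (H, W, 3) rendered view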

Learning Feature Descriptors using Camera Pose Supervision

1 code implementation ECCV 2020 Qianqian Wang, Xiaowei Zhou, Bharath Hariharan, Noah Snavely

Recent research on learned visual descriptors has shown promising improvements in correspondence estimation, a key component of many 3D vision tasks.

MetaSDF: Meta-learning Signed Distance Functions

2 code implementations NeurIPS 2020 Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution.

Meta-Learning
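
For context on the representation: a signed distance function maps any 3D point to its distance from the surface (negative inside, zero on the surface); a neural SDF replaces the closed form below with a network, and MetaSDF meta-learns how such networks are fit. The analytic example is illustrative only.

import numpy as np

def sdf_sphere(points, center=np.zeros(3), radius=1.0):
    # Signed distance to a sphere: negative inside, zero on the surface, positive outside.
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
print(sdf_sphere(pts))   # [-1.  0.  1.]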

An Analysis of SVD for Deep Rotation Estimation

2 code implementations NeurIPS 2020 Jake Levinson, Carlos Esteves, Kefan Chen, Noah Snavely, Angjoo Kanazawa, Afshin Rostamizadeh, Ameesh Makadia

Symmetric orthogonalization via SVD, and closely related procedures, are well-known techniques for projecting matrices onto $O(n)$ or $SO(n)$.

3D Pose Estimation, 3D Rotation Estimation
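
A minimal sketch of the symmetric orthogonalization discussed above: project an arbitrary 3x3 matrix (for example, a 9-D network output reshaped to 3x3) onto SO(3) via SVD, with a sign fix so the determinant is +1.

import numpy as np

def svd_to_so3(M):
    # Closest rotation matrix (in Frobenius norm) to an arbitrary 3x3 matrix M.
    U, _, Vt = np.linalg.svd(M)
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # sign fix keeps det(R) = +1
    return U @ S @ Vt

M = np.random.randn(3, 3)          # e.g. raw rotation regression output
R = svd_to_so3(M)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)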

Crowdsampling the Plenoptic Function

1 code implementation ECCV 2020 Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely

These photos represent a sparse and unstructured sampling of the plenoptic function for a particular scene.

Neural Rendering, Novel View Synthesis

Learning to Factorize and Relight a City

no code implementations ECCV 2020 Andrew Liu, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros, Noah Snavely

We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors.

Intrinsic Image Decomposition

Hidden Footprints: Learning Contextual Walkability from 3D Human Trails

no code implementations ECCV 2020 Jin Sun, Hadar Averbuch-Elor, Qianqian Wang, Noah Snavely

Predicting where people can walk in a scene is important for many tasks, including autonomous driving systems and human behavior analysis.

Autonomous Driving, valid

NeRF++: Analyzing and Improving Neural Radiance Fields

5 code implementations 15 Oct 2020 Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun

Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360° capture of bounded scenes and forward-facing capture of bounded and unbounded scenes.

Multi-Plane Program Induction with 3D Box Priors

no code implementations NeurIPS 2020 Yikai Li, Jiayuan Mao, Xiuming Zhang, William T. Freeman, Joshua B. Tenenbaum, Noah Snavely, Jiajun Wu

We consider two important aspects in understanding and editing images: modeling regular, program-like texture or patterns in 2D planes, and 3D posing of these planes in the scene.

Program induction, Program Synthesis

Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes

3 code implementations CVPR 2021 Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang

We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input.

An Ethical Highlighter for People-Centric Dataset Creation

no code implementations 27 Nov 2020 Margot Hanley, Apoorv Khandelwal, Hadar Averbuch-Elor, Noah Snavely, Helen Nissenbaum

Important ethical concerns arising from computer vision datasets of people have been receiving significant attention, and a number of datasets have been withdrawn as a result.

Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image

1 code implementation ICCV 2021 Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa

We introduce the problem of perpetual view generation - long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image.

Image Generation, Perpetual View Generation +1
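
The generation loop is often summarized as render-refine-repeat: re-render the current RGBD frame from the next camera pose, refine the result to fill disocclusions, and continue along the trajectory. The skeleton below is only a schematic; render_from and refine are hypothetical names standing in for the paper's renderer and refinement network.

def perpetual_view_generation(image, depth, camera_path, render_from, refine):
    # Schematic render-refine-repeat loop; all callables are placeholders.
    frames = []
    for next_camera in camera_path:
        warped = render_from(image, depth, next_camera)   # re-render current RGBD at the next pose
        image, depth = refine(warped)                     # inpaint disocclusions, restore detail
        frames.append(image)
    return frames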

IBRNet: Learning Multi-View Image-Based Rendering

1 code implementation CVPR 2021 Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, Thomas Funkhouser

Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes.

Neural Rendering, Novel View Synthesis

Repopulating Street Scenes

no code implementations CVPR 2021 Yifan Wang, Andrew Liu, Richard Tucker, Jiajun Wu, Brian L. Curless, Steven M. Seitz, Noah Snavely

We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.

Autonomous Driving

PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting

no code implementations CVPR 2021 Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, Noah Snavely

We present PhySG, an end-to-end inverse rendering pipeline that includes a fully differentiable renderer and can reconstruct geometry, materials, and illumination from scratch from a set of RGB input images.

Depth Prediction, Image Relighting +3
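
The spherical Gaussians in the title are a closed-form primitive for representing environment illumination (and specular lobes) that can be evaluated and differentiated analytically. A minimal sketch with made-up lobe parameters, not the paper's learned values:

import numpy as np

def spherical_gaussian(v, axis, sharpness, amplitude):
    # G(v) = amplitude * exp(sharpness * (dot(v, axis) - 1)) for a unit direction v.
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

lobes = [  # illustrative environment light as a small SG mixture
    (np.array([0.0, 1.0, 0.0]), 20.0, np.array([1.0, 0.9, 0.8])),
    (np.array([1.0, 0.0, 0.0]),  5.0, np.array([0.2, 0.2, 0.3])),
]
v = np.array([0.0, 1.0, 0.0])   # query direction
radiance = sum(spherical_gaussian(v, axis, lam, mu) for axis, lam, mu in lobes)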

De-rendering the World's Revolutionary Artefacts

1 code implementation CVPR 2021 Shangzhe Wu, Ameesh Makadia, Jiajun Wu, Noah Snavely, Richard Tucker, Angjoo Kanazawa

Recent works have shown exciting results in unsupervised image de-rendering -- learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision.

Extreme Rotation Estimation using Dense Correlation Volumes

1 code implementation CVPR 2021 Ruojin Cai, Bharath Hariharan, Noah Snavely, Hadar Averbuch-Elor

We present a technique for estimating the relative 3D rotation of an RGB image pair in an extreme setting, where the images have little or no overlap.

Feature Correlation
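
A dense correlation volume stores the similarity between every feature location in one image and every location in the other. The sketch below is the generic construction on dummy feature maps, not the paper's full architecture.

import numpy as np

def correlation_volume(f1, f2):
    # All-pairs dot products between (C, H, W) feature maps -> (H, W, H, W) volume.
    C, H, W = f1.shape
    corr = f1.reshape(C, H * W).T @ f2.reshape(C, H * W) / np.sqrt(C)
    return corr.reshape(H, W, H, W)

f1 = np.random.randn(64, 16, 16)   # placeholder features of image 1
f2 = np.random.randn(64, 16, 16)   # placeholder features of image 2
vol = correlation_volume(f1, f2)   # vol[i, j, k, l] relates pixel (i, j) in image 1 to (k, l) in image 2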

Wide-Baseline Relative Camera Pose Estimation with Directional Learning

1 code implementation CVPR 2021 Kefan Chen, Noah Snavely, Ameesh Makadia

Modern deep learning techniques that regress the relative camera pose between two images have difficulty dealing with challenging scenarios, such as large camera motions resulting in occlusions and significant changes in perspective that leave little overlap between images.

Pose Estimation, regression

Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision

1 code implementation ICCV 2021 Xiaoshi Wu, Hadar Averbuch-Elor, Jin Sun, Noah Snavely

The abundance and richness of Internet photos of landmarks and cities has led to significant progress in 3D vision over the past two decades, including automated 3D reconstructions of the world's landmarks from tourist photos.

Descriptive, Image Captioning +1

Who's Waldo? Linking People Across Text and Images

1 code implementation ICCV 2021 Claire Yuqing Cui, Apoorv Khandelwal, Yoav Artzi, Noah Snavely, Hadar Averbuch-Elor

We present a task and benchmark dataset for person-centric visual grounding, the problem of linking between people named in a caption and people pictured in an image.

Ranked #1 on Person-centric Visual Grounding on Who’s Waldo (using extra training data)

Person-centric Visual Grounding

Dimensions of Motion: Monocular Prediction through Flow Subspaces

no code implementations 2 Dec 2021 Richard Strong Bowen, Richard Tucker, Ramin Zabih, Noah Snavely

We introduce a way to learn to estimate a scene representation from a single image by predicting a low-dimensional subspace of optical flow for each training example, which encompasses the variety of possible camera and object movement.

Depth Estimation, Depth Prediction +3
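
To make "a low-dimensional subspace of optical flow" concrete: the network predicts a handful of basis flow fields for each image, and the flow induced by a particular camera or object motion is modeled as a linear combination of them. A toy composition with placeholder values:

import numpy as np

K, H, W = 6, 32, 32
basis = np.random.randn(K, H, W, 2)           # predicted basis flow fields (placeholders)
coeffs = np.random.randn(K)                    # coefficients for one particular motion
flow = np.tensordot(coeffs, basis, axes=1)     # (H, W, 2) flow constrained to the predicted subspace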

IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images

no code implementations CVPR 2022 Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely

We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines.

Disentanglement, Inverse Rendering

3D Moments from Near-Duplicate Photos

no code implementations CVPR 2022 Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen

As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.

Motion Interpolation

Neural 3D Reconstruction in the Wild

1 code implementation 25 May 2022 Jiaming Sun, Xi Chen, Qianqian Wang, Zhengqi Li, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely

We are witnessing an explosion of neural implicit representations in computer vision and graphics.

3D Reconstruction, Surface Reconstruction

ARF: Artistic Radiance Fields

1 code implementation 13 Jun 2022 Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, Noah Snavely

We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images

1 code implementation 22 Jul 2022 Zhengqi Li, Qianqian Wang, Noah Snavely, Angjoo Kanazawa

We present a method for learning to generate unbounded flythrough videos of natural scenes starting from a single view, where this capability is learned from a collection of single photographs, without requiring camera poses or even multiple views of each scene.

Perpetual View Generation

im2nerf: Image to Neural Radiance Field in the Wild

no code implementations 8 Sep 2022 Lu Mi, Abhijit Kundu, David Ross, Frank Dellaert, Noah Snavely, Alireza Fathi

We take a step towards addressing this shortcoming by introducing a model that encodes the input image into a disentangled object representation that contains a code for object shape, a code for object appearance, and an estimated camera pose from which the object image is captured.

Novel View Synthesis, Object

FactorMatte: Redefining Video Matting for Re-Composition Tasks

no code implementations 3 Nov 2022 Zeqi Gu, Wenqi Xian, Noah Snavely, Abe Davis

Based on this observation, we present a method for solving the factor matting problem that produces useful decompositions even for video with complex cross-layer interactions like splashes, shadows, and reflections.

counterfactual, Image Matting +1

DynIBaR: Neural Dynamic Image-Based Rendering

1 code implementation CVPR 2023 Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely

Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories.

Seeing a Rose in Five Thousand Ways

no code implementations CVPR 2023 Yunzhi Zhang, Shangzhe Wu, Noah Snavely, Jiajun Wu

These instances all share the same intrinsics, but appear different due to a combination of variance within these intrinsics and differences in extrinsic factors, such as pose and illumination.

Image Generation, Intrinsic Image Decomposition +1

Omnimatte3D: Associating Objects and Their Effects in Unconstrained Monocular Video

no code implementations CVPR 2023 Mohammed Suhail, Erika Lu, Zhengqi Li, Noah Snavely, Leonid Sigal, Forrester Cole

Instead, our method applies recent progress in monocular camera pose and depth estimation to create a full, RGBD video layer for the background, along with a video layer for each foreground object.

Depth Estimation

Persistent Nature: A Generative Model of Unbounded 3D Worlds

1 code implementation CVPR 2023 Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola, Noah Snavely

Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed extent with limited camera motions.

Scene Generation

ASIC: Aligning Sparse in-the-wild Image Collections

no code implementations ICCV 2023 Kamal Gupta, Varun Jampani, Carlos Esteves, Abhinav Shrivastava, Ameesh Makadia, Noah Snavely, Abhishek Kar

We present a self-supervised technique that directly optimizes on a sparse collection of images of a particular object/object category to obtain consistent dense correspondences across the collection.

Object

Neural Lens Modeling

no code implementations CVPR 2023 Wenqi Xian, Aljaž Božič, Noah Snavely, Christoph Lassner

Recent methods for 3D reconstruction and rendering increasingly benefit from end-to-end optimization of the entire image formation process.

3D Reconstruction, Camera Calibration

Neural Scene Chronology

1 code implementation CVPR 2023 Haotong Lin, Qianqian Wang, Ruojin Cai, Sida Peng, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely

Specifically, we represent the scene as a space-time radiance field with a per-image illumination embedding, where temporally-varying scene changes are encoded using a set of learned step functions.
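
One plausible reading of the learned step functions, shown only as an illustration (the transition times and sharpness below are placeholders, not values from the paper): encode a timestamp as a vector of smooth step activations, so appearance changes switch on near learned transition times rather than drifting continuously.

import numpy as np

def step_encoding(t, transition_times, sharpness=50.0):
    # Smooth step (sigmoid) activation per hypothetical learned transition time.
    return 1.0 / (1.0 + np.exp(-sharpness * (t - np.asarray(transition_times))))

transitions = [2010.0, 2014.5, 2018.0]      # hypothetical transition times
print(step_encoding(2016.0, transitions))    # roughly [1, 1, 0]: which changes have occurred by 2016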

Doppelgangers: Learning to Disambiguate Images of Similar Structures

1 code implementation ICCV 2023 Ruojin Cai, Joseph Tung, Qianqian Wang, Hadar Averbuch-Elor, Bharath Hariharan, Noah Snavely

Our evaluation shows that our method can distinguish illusory matches in difficult cases, and can be integrated into SfM pipelines to produce correct, disambiguated 3D reconstructions.

3D Reconstruction, Binary Classification

Generative Image Dynamics

no code implementations 14 Sep 2023 Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski

We present an approach to modeling an image-space prior on scene motion.

NeRFiller: Completing Scenes via Generative 3D Inpainting

no code implementations 7 Dec 2023 Ethan Weber, Aleksander Hołyński, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, Angjoo Kanazawa

In contrast to related works, we focus on completing scenes rather than deleting foreground objects, and our approach does not require tight 2D object masks or text.

3D Inpainting

PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation

no code implementations 19 Apr 2024 Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y. Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, William T. Freeman

Unlike unconditional or text-conditioned dynamics generation, action-conditioned dynamics requires perceiving the physical material properties of objects and grounding the 3D motion prediction on these properties, such as object stiffness.
