Search Results for author: Matthew Tancik

Found 14 papers, 11 papers with code

Plenoxels: Radiance Fields without Neural Networks

2 code implementations 9 Dec 2021 Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa

We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis.

PlenOctrees for Real-time Rendering of Neural Radiance Fields

3 code implementations ICCV 2021 Alex Yu, RuiLong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.

Neural Rendering · Novel View Synthesis
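The speed-up comes from baking a trained NeRF into an octree whose leaves store precomputed values (opacity plus spherical-harmonic color coefficients), so a render-time query is a cheap tree descent rather than a network evaluation. A minimal, illustrative point lookup over the unit cube (names and structure are hypothetical, not the authors' code):

```python
class OctreeNode:
    """Node of a sparse octree over [0, 1)^3; leaves hold a payload."""
    def __init__(self, value=None, children=None):
        self.value = value        # leaf payload, e.g. density + SH coefficients
        self.children = children  # list of 8 child nodes, or None for a leaf

def query(node, p, origin=(0.0, 0.0, 0.0), size=1.0):
    """Descend to the leaf whose cell contains the point p."""
    while node.children is not None:
        half = size / 2.0
        # One bit per axis: is p in the upper half of this cell's extent?
        bits = [int(p[i] >= origin[i] + half) for i in range(3)]
        node = node.children[bits[0] * 4 + bits[1] * 2 + bits[2]]
        origin = tuple(origin[i] + bits[i] * half for i in range(3))
        size = half
    return node.value

# One subdivision level: eight leaf octants labeled 0..7.
root = OctreeNode(children=[OctreeNode(value=i) for i in range(8)])
print(query(root, (0.9, 0.1, 0.1)))  # leaf 4: high x, low y, low z
```

The real system additionally evaluates the stored spherical harmonics in the view direction at each visited leaf; this sketch shows only the lookup structure.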

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

1 code implementation ICCV 2021 Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

no code implementations CVPR 2021 Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.

Learned Initializations for Optimizing Coordinate-Based Neural Representations

2 code implementations CVPR 2021 Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low-dimensional signals.


pixelNeRF: Neural Radiance Fields from One or Few Images

2 code implementations CVPR 2021 Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa

This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).

3D Reconstruction · Novel View Synthesis

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

9 code implementations NeurIPS 2020 Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
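The mapping studied in the paper takes the form γ(v) = [cos(2πBv), sin(2πBv)]; a minimal NumPy sketch, assuming a random Gaussian frequency matrix B (one of the variants the paper evaluates):

```python
import numpy as np

def fourier_features(v, B):
    """Map coordinates v of shape (N, d) to Fourier features of shape (N, 2m),
    using a frequency matrix B of shape (m, d)."""
    proj = 2.0 * np.pi * v @ B.T                                # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(256, 2))   # m=256 frequencies, 2-D input (e.g. pixel coords)
coords = rng.uniform(size=(4, 2))           # four sample points in [0, 1]^2
feats = fourier_features(coords, B)
print(feats.shape)  # (4, 512)
```

These features, rather than the raw coordinates, are fed to the MLP; the scale of B controls the bandwidth of frequencies the network can represent.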

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

22 code implementations ECCV 2020 Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.

Neural Rendering · Novel View Synthesis
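A hedged sketch of just this input/output interface (a toy one-hidden-layer network, not the authors' architecture): the function maps a 5D coordinate to a non-negative volume density and an RGB radiance in [0, 1].

```python
import numpy as np

def radiance_field(coord5d, weights):
    """Toy stand-in MLP: coord5d is (x, y, z, theta, phi); returns (sigma, rgb)."""
    h = np.tanh(weights["W1"] @ coord5d + weights["b1"])
    out = weights["W2"] @ h + weights["b2"]      # 4 raw outputs
    sigma = np.log1p(np.exp(out[0]))             # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))         # sigmoid keeps color in [0, 1]
    return sigma, rgb

rng = np.random.default_rng(0)
weights = {
    "W1": rng.normal(size=(64, 5)), "b1": np.zeros(64),
    "W2": rng.normal(size=(4, 64)), "b2": np.zeros(4),
}
sigma, rgb = radiance_field(np.array([0.1, 0.2, 0.3, 0.4, 0.5]), weights)
```

The full method also applies a positional encoding to the inputs and composites many such samples along each camera ray via volume rendering; both are omitted here.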

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation CVPR 2020 Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

StegaStamp: Invisible Hyperlinks in Physical Photographs

2 code implementations CVPR 2020 Matthew Tancik, Ben Mildenhall, Ren Ng

Printed and digitally displayed photos have the ability to hide imperceptible digital data that can be accessed through internet-connected imaging systems.
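For intuition only, here is a classical least-significant-bit embed/extract of data in pixel values. StegaStamp itself trains a neural encoder/decoder so the hidden bits survive printing and re-photography, which this fragile, purely digital scheme does not.

```python
import numpy as np

def embed(pixels, data):
    """Hide a byte string in the least-significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    out = pixels.flatten().copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return out.reshape(pixels.shape)

def extract(pixels, n_bytes):
    """Read back n_bytes from the least-significant bits."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.zeros((8, 8), dtype=np.uint8)
stego = embed(img, b"hi")
print(extract(stego, 2))  # b'hi'
```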


Flash Photography for Data-Driven Hidden Scene Recovery

no code implementations 27 Oct 2018 Matthew Tancik, Guy Satat, Ramesh Raskar

The method is able to localize 12 cm wide hidden objects in 2D with 1.7 cm accuracy.

Object Localization

Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics

1 code implementation 27 Jul 2018 Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand

While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or 'icons'.

Synthetic Data Generation

Lensless Imaging with Compressive Ultrafast Sensing

no code implementations 19 Oct 2016 Guy Satat, Matthew Tancik, Ramesh Raskar

Each sensor acquisition is encoded with a different illumination pattern and produces a time series where time is a function of the photon's origin in the scene.

Compressive Sensing · Time Series
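Ignoring the time dimension, each acquisition reduces to one inner product between the scene and the active illumination pattern. An illustrative single-time-bin sketch of that measurement model, with random binary patterns standing in for the paper's optimized sensing design:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                     # scene pixels, number of patterns
scene = np.zeros(n)
scene[[5, 20, 40]] = 1.0                          # sparse scene: three bright points

# One random binary illumination pattern per acquisition.
patterns = rng.integers(0, 2, size=(m, n)).astype(float)

# Each measurement is the scene's response under one pattern: y = Phi @ x.
measurements = patterns @ scene
print(measurements.shape)  # (32,)
```

The paper's reconstruction then recovers the scene from these m < n encoded measurements by exploiting sparsity; that solver is not shown here.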
