Search Results for author: Matthew Tancik

Found 24 papers, 19 papers with code

gsplat: An Open-Source Library for Gaussian Splatting

1 code implementation · 10 Sep 2024 · Vickie Ye, RuiLong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa

gsplat is an open-source library designed for training and developing Gaussian Splatting methods.
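
gsplat itself is CUDA-accelerated; as a rough illustration of the math such libraries implement (a numpy sketch, not gsplat's actual API), here is the EWA-style projection step that turns a 3D Gaussian's covariance into its 2D image-plane footprint:

```python
import numpy as np

def project_gaussian_cov(cov3d, mean_cam, fx, fy):
    """Project a 3D Gaussian covariance (camera space) to 2D.

    Uses the first-order (EWA splatting) approximation of perspective
    projection: Sigma2d = J @ Sigma3d @ J.T, with J the projection Jacobian.
    """
    x, y, z = mean_cam
    # Jacobian of the pinhole projection (u, v) = (fx*x/z, fy*y/z)
    J = np.array([
        [fx / z, 0.0, -fx * x / z**2],
        [0.0, fy / z, -fy * y / z**2],
    ])
    return J @ cov3d @ J.T  # 2x2 image-plane covariance

# Example: isotropic Gaussian 2 units in front of the camera
cov2d = project_gaussian_cov(np.eye(3) * 0.01,
                             np.array([0.0, 0.0, 2.0]),
                             fx=500.0, fy=500.0)
print(cov2d)
```

The 2x2 result determines which pixels the splatted Gaussian touches and with what falloff.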

GARField: Group Anything with Radiance Fields

1 code implementation · CVPR 2024 · Chung Min Kim, Mingxuan Wu, Justin Kerr, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa

We optimize this field from a set of 2D masks provided by Segment Anything (SAM) in a way that respects coarse-to-fine hierarchy, using scale to consistently fuse conflicting masks from different viewpoints.

Scene Understanding

NerfAcc: Efficient Sampling Accelerates NeRFs

no code implementations · ICCV 2023 · RuiLong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa

Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering.
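
The quadrature behind that cost can be sketched in a few lines of numpy (a simplified illustration, not NerfAcc's implementation): every ray accumulates color from many samples, each weighted by its opacity and the transmittance of everything in front of it.

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Standard NeRF-style volume-rendering quadrature along one ray.

    sigmas: (N,) densities at N samples
    colors: (N, 3) radiance at each sample
    deltas: (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights

# A dense red sample early on the ray dominates the output color.
rgb, w = composite(np.array([0.0, 50.0, 50.0]),
                   np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], float),
                   np.array([0.1, 0.1, 0.1]))
print(rgb)
```

Because the cost is linear in the number of samples per ray, skipping empty or occluded samples (NerfAcc's focus) directly accelerates both training and rendering.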

Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs

1 code implementation · ICCV 2023 · Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski, Angjoo Kanazawa

Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such as floaters or flawed geometry when rendered outside the camera trajectory.

Novel View Synthesis

LERF: Language Embedded Radiance Fields

5 code implementations · ICCV 2023 · Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik

Humans describe the physical world using natural language to refer to specific 3D locations based on a vast range of properties: visual appearance, semantics, abstract associations, or actionable affordances.

Nerfstudio: A Modular Framework for Neural Radiance Field Development

2 code implementations · 8 Feb 2023 · Matthew Tancik, Ethan Weber, Evonne Ng, RuiLong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa

Neural Radiance Fields (NeRF) are a rapidly growing area of research with wide-ranging applications in computer vision, graphics, robotics, and more.

NerfAcc: A General NeRF Acceleration Toolbox

1 code implementation · 10 Oct 2022 · RuiLong Li, Matthew Tancik, Angjoo Kanazawa

We propose NerfAcc, a toolbox for efficient volumetric rendering of radiance fields.

The One Where They Reconstructed 3D Humans and Environments in TV Shows

no code implementations · 28 Jul 2022 · Georgios Pavlakos, Ethan Weber, Matthew Tancik, Angjoo Kanazawa

TV shows depict a wide variety of human behaviors and have been studied extensively for their potential to be a rich source of data for many applications.

3D Reconstruction · Gaze Estimation

PlenOctrees for Real-time Rendering of Neural Radiance Fields

5 code implementations · ICCV 2021 · Alex Yu, RuiLong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.

Neural Rendering · Novel View Synthesis

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

4 code implementations · ICCV 2021 · Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.
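
Mip-NeRF's anti-aliasing comes from encoding a Gaussian-shaped region of space rather than a single point. A minimal 1D numpy sketch of that integrated positional encoding, using the closed form E[sin(x)] = sin(mu)·exp(-var/2) for x ~ N(mu, var):

```python
import numpy as np

def integrated_pos_enc(mu, var, num_freqs=4):
    """Expected sin/cos features of a 1D Gaussian N(mu, var).

    For x ~ N(mu, var): E[sin(2^k x)] = sin(2^k mu) * exp(-0.5 * 4^k var),
    so high frequencies are smoothly attenuated for large regions,
    which is what suppresses aliasing.
    """
    feats = []
    for k in range(num_freqs):
        scale, svar = 2.0 ** k, (4.0 ** k) * var
        damp = np.exp(-0.5 * svar)
        feats += [np.sin(scale * mu) * damp, np.cos(scale * mu) * damp]
    return np.array(feats)

# A wide Gaussian (large var) suppresses its high-frequency features;
# a narrow one keeps them.
narrow = integrated_pos_enc(0.5, 1e-4)
wide = integrated_pos_enc(0.5, 4.0)
print(np.abs(narrow[-2:]), np.abs(wide[-2:]))
```

(In the paper the Gaussians approximate conical frustums cast through pixels; this 1D version only shows the frequency attenuation effect.)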

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

no code implementations · CVPR 2021 · Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.

pixelNeRF: Neural Radiance Fields from One or Few Images

2 code implementations · CVPR 2021 · Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa

This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).

3D Reconstruction · Generalizable Novel View Synthesis +1

Learned Initializations for Optimizing Coordinate-Based Neural Representations

3 code implementations · CVPR 2021 · Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low dimensional signals.

Meta-Learning

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

13 code implementations · NeurIPS 2020 · Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
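
The mapping itself is only a couple of lines. A numpy sketch following the paper's formulation, where the standard deviation of the random frequency matrix B is a tunable bandwidth hyperparameter:

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature mapping of low-dimensional inputs.

    x: (N, d) input coordinates; B: (m, d) random frequencies.
    Returns gamma(x) = [cos(2*pi*B x), sin(2*pi*B x)] of shape (N, 2m).
    """
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 2))  # scale sets the frequency bandwidth
x = rng.uniform(size=(5, 2))              # e.g. 2D pixel coordinates in [0, 1]
print(fourier_features(x, B).shape)       # (5, 128)
```

An MLP fed these features instead of raw coordinates can fit fine image detail that it otherwise cannot represent, due to the spectral bias of plain coordinate inputs.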

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

37 code implementations · ECCV 2020 · Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.

Generalizable Novel View Synthesis · Low-Dose X-Ray CT Reconstruction +2
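
Before reaching the MLP, each input coordinate is passed through the paper's positional encoding so the network can represent high-frequency detail (L=10 for the spatial location, L=4 for the viewing direction). A numpy sketch:

```python
import numpy as np

def positional_encoding(p, L=10):
    """NeRF's encoding gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ...,
    sin(2^(L-1) pi p), cos(2^(L-1) pi p)), applied to each coordinate.

    p: (..., d) coordinates; returns (..., 2 * L * d) features.
    """
    freqs = (2.0 ** np.arange(L)) * np.pi      # frequencies 2^k * pi
    angles = p[..., None] * freqs              # (..., d, L)
    enc = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)

xyz = np.array([[0.1, -0.3, 0.7]])
print(positional_encoding(xyz, L=10).shape)    # (1, 60)
```

With these settings, the 5D input becomes a 60-dimensional position encoding plus a 24-dimensional direction encoding before entering the fully-connected network.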

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation · CVPR 2020 · Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Lighting Estimation

StegaStamp: Invisible Hyperlinks in Physical Photographs

3 code implementations · CVPR 2020 · Matthew Tancik, Ben Mildenhall, Ren Ng

Printed and digitally displayed photos have the ability to hide imperceptible digital data that can be accessed through internet-connected imaging systems.

Steganographics

Flash Photography for Data-Driven Hidden Scene Recovery

no code implementations · 27 Oct 2018 · Matthew Tancik, Guy Satat, Ramesh Raskar

The method is able to localize 12 cm-wide hidden objects in 2D with 1.7 cm accuracy.

Object Localization

Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics

1 code implementation · 27 Jul 2018 · Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand

While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or "icons".

Synthetic Data Generation

Lensless Imaging with Compressive Ultrafast Sensing

no code implementations · 19 Oct 2016 · Guy Satat, Matthew Tancik, Ramesh Raskar

Each sensor acquisition is encoded with a different illumination pattern and produces a time series where time is a function of the photon's origin in the scene.

Compressive Sensing · Time Series +1
