1 code implementation • 10 Sep 2024 • Vickie Ye, RuiLong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa
gsplat is an open-source library designed for training and developing Gaussian Splatting methods.
1 code implementation • CVPR 2024 • Chung Min Kim, Mingxuan Wu, Justin Kerr, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa
We optimize this field from a set of 2D masks provided by Segment Anything (SAM) in a way that respects coarse-to-fine hierarchy, using scale to consistently fuse conflicting masks from different viewpoints.
no code implementations • ICCV 2023 • RuiLong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering.
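The cost referred to above comes from the discrete volume-rendering quadrature, which must be evaluated over many samples per ray. A minimal NumPy sketch of that quadrature (the standard alpha-compositing formula, not NerfAcc's actual implementation) looks like:

```python
import numpy as np

def composite(sigmas, rgbs, deltas):
    """Discrete volume-rendering quadrature along one ray.

    sigmas: (N,) densities, rgbs: (N, 3) colors, deltas: (N,) segment lengths.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)      # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])   # shift: light reaching sample i
    weights = alphas * trans                      # contribution of each sample
    return weights @ rgbs                         # composited ray color
```

Every rendered pixel requires one such sum over N network evaluations, which is why sample-efficient rendering toolboxes matter.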
1 code implementation • ICCV 2023 • Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski, Angjoo Kanazawa
Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such as floaters or flawed geometry when rendered outside the camera trajectory.
1 code implementation • ICCV 2023 • Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa
We propose a method for editing NeRF scenes with text-instructions.
5 code implementations • ICCV 2023 • Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik
Humans describe the physical world using natural language to refer to specific 3D locations based on a vast range of properties: visual appearance, semantics, abstract associations, or actionable affordances.
2 code implementations • 8 Feb 2023 • Matthew Tancik, Ethan Weber, Evonne Ng, RuiLong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
Neural Radiance Fields (NeRF) are a rapidly growing area of research with wide-ranging applications in computer vision, graphics, robotics, and more.
1 code implementation • 10 Oct 2022 • RuiLong Li, Matthew Tancik, Angjoo Kanazawa
We propose NerfAcc, a toolbox for efficient volumetric rendering of radiance fields.
no code implementations • 28 Jul 2022 • Georgios Pavlakos, Ethan Weber, Matthew Tancik, Angjoo Kanazawa
TV shows depict a wide variety of human behaviors and have been studied extensively for their potential to be a rich source of data for many applications.
2 code implementations • CVPR 2022 • Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T. Barron, Henrik Kretzschmar
We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments.
4 code implementations • CVPR 2022 • Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa
We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis.
2 code implementations • ICCV 2021 • Ajay Jain, Matthew Tancik, Pieter Abbeel
We present DietNeRF, a 3D neural scene representation estimated from a few images.
5 code implementations • ICCV 2021 • Alex Yu, RuiLong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
4 code implementations • ICCV 2021 • Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan
Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.
no code implementations • CVPR 2021 • Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
2 code implementations • CVPR 2021 • Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa
This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
3 code implementations • CVPR 2021 • Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng
Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low-dimensional signals.
13 code implementations • NeurIPS 2020 • Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
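The Fourier feature mapping described above is simple enough to sketch directly. The following is an illustrative NumPy version of the mapping itself (the frequency scale of 10.0 and the choice of 256 frequencies are arbitrary assumptions, not the paper's tuned values), omitting the downstream MLP training:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(v, B):
    """Map low-dimensional inputs v (..., d) to [cos(2*pi*B v), sin(2*pi*B v)]."""
    proj = 2.0 * np.pi * v @ B.T  # (..., m) random frequency projections
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# B holds m random frequency vectors; its scale controls the bandwidth of
# functions the downstream MLP can fit (illustrative values, not the paper's).
B = 10.0 * rng.standard_normal((256, 2))  # m=256 frequencies for 2D inputs
x = rng.random((4, 2))                    # four 2D coordinates in [0, 1)^2
feats = fourier_features(x, B)            # shape (4, 512)
```

The MLP is then trained on `feats` instead of the raw coordinates, which counteracts the spectral bias toward low-frequency functions.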
37 code implementations • ECCV 2020 • Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
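The input/output contract of that network can be sketched with a toy stand-in. This is only an illustration of the 5D-coordinate-to-(density, color) signature; the real model is a much deeper MLP with positional encoding and a separate view-dependent color head:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for F_Theta: (x, y, z, theta, phi) -> (sigma, r, g, b).
# Weights are random; only the interface is meaningful here.
W1 = rng.standard_normal((64, 5)) * 0.1
W2 = rng.standard_normal((4, 64)) * 0.1

def radiance_field(xyz, view_dir):
    inp = np.concatenate([xyz, view_dir])    # single continuous 5D coordinate
    h = np.maximum(W1 @ inp, 0.0)            # ReLU hidden layer
    out = W2 @ h
    sigma = np.log1p(np.exp(out[0]))         # softplus keeps density non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))     # sigmoid keeps color in [0, 1]
    return sigma, rgb
```

Rendering then queries this function at many points along each camera ray and composites the results with the volume-rendering integral.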
1 code implementation • CVPR 2020 • Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
3 code implementations • CVPR 2020 • Matthew Tancik, Ben Mildenhall, Ren Ng
Printed and digitally displayed photos have the ability to hide imperceptible digital data that can be accessed through internet-connected imaging systems.
no code implementations • 27 Oct 2018 • Matthew Tancik, Guy Satat, Ramesh Raskar
The method is able to localize 12 cm wide hidden objects in 2D with 1.7 cm accuracy.
1 code implementation • 27 Jul 2018 • Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand
While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or 'icons'.
no code implementations • 19 Oct 2016 • Guy Satat, Matthew Tancik, Ramesh Raskar
Each sensor acquisition is encoded with a different illumination pattern and produces a time series where time is a function of the photon's origin in the scene.
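The acquisition model described above can be illustrated with a simple linear sketch: each scene point contributes photons to the time bin determined by its distance, weighted by the current illumination pattern. All names and shapes below are hypothetical placeholders, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: a scene of reflectance values, a time bin per
# scene point (photon travel time), and a set of binary illumination patterns.
n_points, n_bins, n_patterns = 32, 16, 8
distances = rng.integers(0, n_bins, size=n_points)      # arrival bin per point
patterns = rng.integers(0, 2, size=(n_patterns, n_points))
scene = rng.random(n_points)

def acquire(pattern):
    """One encoded acquisition: photons binned by arrival time."""
    hist = np.zeros(n_bins)
    np.add.at(hist, distances, pattern * scene)
    return hist

measurements = np.stack([acquire(p) for p in patterns])  # (n_patterns, n_bins)
```

Recovering `scene` from `measurements` is then an inverse problem, with the varying patterns providing the spatial encoding that a single time series alone lacks.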