Search Results for author: Vincent Sitzmann

Found 18 papers, 8 papers with code

Unsupervised Discovery and Composition of Object Light Fields

no code implementations • 8 May 2022 • Cameron Smith, Hong-Xing Yu, Sergey Zakharov, Fredo Durand, Joshua B. Tenenbaum, Jiajun Wu, Vincent Sitzmann

Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding.

Novel View Synthesis • Scene Understanding

Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation

no code implementations • 9 Dec 2021 • Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B. Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, Vincent Sitzmann

Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors.

Neural Fields in Visual Computing and Beyond

no code implementations • 22 Nov 2021 • Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar

Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.

3D Reconstruction • Image Animation • +1
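
The survey's central object is a coordinate-based neural network (a neural field) that maps space-time coordinates to a scene or object property. A minimal PyTorch sketch of such a field, with the architecture and dimensions chosen purely for illustration:

    import torch
    import torch.nn as nn

    class CoordinateField(nn.Module):
        """Generic neural field: maps a space-time coordinate to a scene property."""
        def __init__(self, in_dim=4, hidden=256, out_dim=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim))   # e.g. RGB + density

        def forward(self, coords):            # coords: (N, 4) = (x, y, z, t)
            return self.net(coords)

    field = CoordinateField()
    props = field(torch.rand(1024, 4))        # query at arbitrary continuous coordinates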

Learning Signal-Agnostic Manifolds of Neural Fields

no code implementations • NeurIPS 2021 • Yilun Du, Katherine M. Collins, Joshua B. Tenenbaum, Vincent Sitzmann

We leverage neural fields to capture the underlying structure in image, shape, audio and cross-modal audiovisual domains in a modality-independent manner.

Deep Medial Fields

no code implementations • 7 Jun 2021 • Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi

Implicit representations of geometry, such as occupancy fields or signed distance fields (SDF), have recently regained popularity for encoding 3D solid shape in a functional form.
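
As a toy illustration of what "a functional form" means here, a signed distance field can be as simple as a function whose zero level set is the surface (an analytic sphere below, not the learned medial field from the paper):

    import torch

    def sphere_sdf(p, center, radius):
        """Signed distance to a sphere: negative inside, zero on the surface, positive outside."""
        return torch.linalg.norm(p - center, dim=-1) - radius

    pts = torch.rand(4096, 3) * 2 - 1                  # query points in [-1, 1]^3
    d = sphere_sdf(pts, torch.zeros(3), 0.5)
    occupancy = d < 0                                   # an occupancy field follows from the sign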

Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering

1 code implementation • NeurIPS 2021 • Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, Fredo Durand

In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation.

Meta-Learning • Scene Understanding
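
Unlike volumetric representations that are sampled many times along each ray, an LFN maps an oriented ray directly to a color, so rendering costs a single network evaluation per ray. A minimal sketch assuming a Plücker-style ray parameterization (illustrative only; see the paper for the actual parameterization and latent conditioning):

    import torch
    import torch.nn as nn

    class LightFieldNetwork(nn.Module):
        """Maps an oriented ray directly to a color -- one evaluation per ray."""
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3))                       # RGB

        def forward(self, origins, dirs):
            dirs = dirs / dirs.norm(dim=-1, keepdim=True)
            moments = torch.cross(origins, dirs, dim=-1)    # Plücker moment m = o x d
            return self.net(torch.cat([dirs, moments], dim=-1))

    lfn = LightFieldNetwork()
    colors = lfn(torch.rand(1024, 3), torch.randn(1024, 3))  # 1024 rays, one forward pass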

MetaSDF: Meta-learning Signed Distance Functions

2 code implementations • NeurIPS 2020 • Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution.

Meta-Learning
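
MetaSDF applies gradient-based meta-learning so that a shared initialization of an SDF network can be specialized to a new shape in a handful of gradient steps. A first-order sketch of that inner loop (the paper's outer loop additionally backpropagates through these adaptation steps; the supervision here is a placeholder):

    import copy
    import torch
    import torch.nn as nn

    meta_sdf = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                             nn.Linear(256, 256), nn.ReLU(),
                             nn.Linear(256, 1))        # meta-learned initialization

    def specialize(meta_net, points, sdf_targets, steps=5, lr=1e-2):
        """Inner loop: adapt a copy of the meta-initialization to a single shape."""
        net = copy.deepcopy(meta_net)
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        for _ in range(steps):
            loss = (net(points).squeeze(-1) - sdf_targets).abs().mean()
            opt.zero_grad(); loss.backward(); opt.step()
        return net

    pts, targets = torch.rand(2048, 3), torch.rand(2048)   # stand-in supervision for one shape
    shape_sdf = specialize(meta_sdf, pts, targets)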

Implicit Neural Representations with Periodic Activation Functions

16 code implementations • NeurIPS 2020 • Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein

However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations.

Image Inpainting
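
The fix proposed in the paper is to use periodic (sine) activations. A minimal sketch of a sine layer with the commonly used frequency scaling and initialization (omega_0 = 30; see the released implementations for the exact details):

    import numpy as np
    import torch
    import torch.nn as nn

    class SineLayer(nn.Module):
        """Linear layer followed by sin(omega_0 * x), with SIREN-style initialization."""
        def __init__(self, in_f, out_f, omega_0=30.0, is_first=False):
            super().__init__()
            self.omega_0 = omega_0
            self.linear = nn.Linear(in_f, out_f)
            with torch.no_grad():
                bound = 1.0 / in_f if is_first else np.sqrt(6.0 / in_f) / omega_0
                self.linear.weight.uniform_(-bound, bound)

        def forward(self, x):
            return torch.sin(self.omega_0 * self.linear(x))

    siren = nn.Sequential(SineLayer(2, 256, is_first=True),
                          SineLayer(256, 256),
                          nn.Linear(256, 3))           # e.g. fit an RGB image from (x, y)
    out = siren(torch.rand(1024, 2))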

State of the Art on Neural Rendering

no code implementations • 8 Apr 2020 • Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

Image Generation • Neural Rendering • +1
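
The defining ingredient is a differentiable rendering step inside the training loop, so that image-space losses can update scene or network parameters. A toy sketch in which the "renderer" is a stand-in shading function (purely illustrative, not any specific method from the survey):

    import torch

    albedo = torch.rand(64, 64, requires_grad=True)    # toy scene parameter
    light = torch.tensor(0.8)

    def render(albedo, light):
        return albedo * light                          # stand-in for a real differentiable renderer

    target = torch.rand(64, 64)                        # observed image
    opt = torch.optim.Adam([albedo], lr=1e-2)
    for _ in range(100):
        loss = (render(albedo, light) - target).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()   # gradients flow through the renderer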

Semantic Implicit Neural Scene Representations With Semi-Supervised Training

no code implementations • 28 Mar 2020 • Amit Kohli, Vincent Sitzmann, Gordon Wetzstein

The recent success of implicit neural scene representations has presented a viable new method for how we capture and store 3D scenes.

3D Semantic Segmentation • Representation Learning

DeepVoxels: Learning Persistent 3D Feature Embeddings

1 code implementation • CVPR 2019 • Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer

In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.

3D Reconstruction • Novel View Synthesis

Unrolled Optimization with Deep Priors

2 code implementations • 22 May 2017 • Steven Diamond, Vincent Sitzmann, Felix Heide, Gordon Wetzstein

A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model.

Deblurring • Denoising
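
One common way to instantiate this is to unroll a fixed number of optimization iterations, alternating a data-fidelity gradient step under the known image formation model with a learned prior (denoiser) step, and to train the whole unrolled graph end to end. A sketch under those assumptions (the operators, network sizes, and step rule are illustrative, not the paper's exact algorithm):

    import torch
    import torch.nn as nn

    class UnrolledSolver(nn.Module):
        """K unrolled steps: data-fidelity gradient step, then a learned prior (denoiser) step."""
        def __init__(self, forward_op, adjoint_op, steps=5):
            super().__init__()
            self.A, self.At, self.steps = forward_op, adjoint_op, steps
            self.denoisers = nn.ModuleList(
                nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(32, 1, 3, padding=1)) for _ in range(steps))
            self.step_size = nn.Parameter(torch.full((steps,), 0.1))

        def forward(self, y):
            x = self.At(y)                                          # initialize from the adjoint
            for k in range(self.steps):
                x = x - self.step_size[k] * self.At(self.A(x) - y)  # gradient step on ||Ax - y||^2
                x = x - self.denoisers[k](x)                        # learned residual prior step
            return x

    identity_op = lambda x: x                           # stand-in for a known image formation model
    solver = UnrolledSolver(identity_op, identity_op)
    x_hat = solver(torch.rand(4, 1, 64, 64))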

Dirty Pixels: Towards End-to-End Image Processing and Perception

1 code implementation • 23 Jan 2017 • Steven Diamond, Vincent Sitzmann, Frank Julca-Aguilar, Stephen Boyd, Gordon Wetzstein, Felix Heide

Conventional imaging processes the RAW sensor measurements in a sequential pipeline of steps such as demosaicking, denoising, deblurring, tone-mapping, and compression.

Autonomous Driving • Deblurring • +9
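
The conventional pipeline referred to above is literally a composition of fixed stages applied to the RAW measurements; the paper instead trains imaging and the downstream perception model jointly. A toy sketch of the staged view (each stage is an illustrative stand-in, not the paper's implementation):

    import torch

    # Each stage is a placeholder; real ISPs implement these with hand-tuned algorithms.
    def demosaick(raw): return raw
    def denoise(img):   return img
    def deblur(img):    return img
    def tonemap(img):   return img.clamp(0.0, 1.0)

    def conventional_isp(raw):
        return tonemap(deblur(denoise(demosaick(raw))))

    rgb = conventional_isp(torch.rand(1, 3, 128, 128))  # RAW in, display-ready image out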

How do people explore virtual environments?

no code implementations • 13 Dec 2016 • Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh Agrawala, Diego Gutierrez, Belen Masia, Gordon Wetzstein

Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention.
