Search Results for author: Vincent Sitzmann

Found 27 papers, 16 papers with code

Implicit Neural Representations with Periodic Activation Functions

24 code implementations NeurIPS 2020 Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein

However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations.

Image Inpainting
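
Below is a minimal PyTorch sketch of the sine-activated layer this paper builds on, using the paper's initialization recipe and w0 = 30; the class name, widths, and the tiny image-fitting network are illustrative, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, with SIREN-style initialization."""
    def __init__(self, in_features, out_features, w0=30.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # First layer: U(-1/n, 1/n); later layers: U(-sqrt(6/n)/w0, sqrt(6/n)/w0).
            bound = 1.0 / in_features if is_first else math.sqrt(6.0 / in_features) / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# A tiny SIREN mapping 2D pixel coordinates to RGB values.
siren = nn.Sequential(
    SineLayer(2, 128, is_first=True),
    SineLayer(128, 128),
    nn.Linear(128, 3),
)
```

Fitting such a network to (coordinate, color) pairs with a plain MSE loss reproduces the basic image-fitting setup; because the derivative of a sine is again a sine, derivatives of the fitted signal stay well behaved.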

Neural Fields in Visual Computing and Beyond

1 code implementation 22 Nov 2021 Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar

Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.

3D Reconstruction, Image Animation +1

pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction

1 code implementation 19 Dec 2023 David Charatan, Sizhe Li, Andrea Tagliasacchi, Vincent Sitzmann

We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D radiance fields parameterized by 3D Gaussian primitives from pairs of images.

3D Reconstruction, Generalizable Novel View Synthesis +1
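
For reference, a short sketch of how a single 3D Gaussian primitive is commonly parameterized in splatting pipelines: a per-primitive scale vector and rotation quaternion are turned into a covariance Sigma = R S S^T R^T. This mirrors the standard Gaussian-splatting formulation rather than pixelSplat's specific prediction head; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def gaussian_covariance(scale, quat):
    """Covariance of one 3D Gaussian primitive from scale (3,) and quaternion (4,) = (w, x, y, z)."""
    w, x, y, z = F.normalize(quat, dim=0)
    R = torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])
    S = torch.diag(scale)
    return R @ S @ S @ R.T  # symmetric positive semi-definite 3x3 covariance
```

A feed-forward model in this spirit would regress, per primitive, a mean, the scale and quaternion above, an opacity, and color coefficients from image features.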

DeepVoxels: Learning Persistent 3D Feature Embeddings

1 code implementation CVPR 2019 Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer

In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.

3D Reconstruction, Novel View Synthesis
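
A minimal sketch of the core idea of a persistent 3D feature embedding: a learnable feature volume queried at continuous coordinates by trilinear interpolation. The volume resolution and channel count are arbitrary assumptions, and DeepVoxels' lifting, integration, and rendering networks are omitted.

```python
import torch
import torch.nn.functional as F

# Persistent, scene-specific feature volume: (N, C, D, H, W).
feature_volume = torch.nn.Parameter(torch.randn(1, 16, 32, 32, 32))

def sample_features(points):
    """points: (M, 3) coordinates in [-1, 1]^3, ordered (x, y, z) as grid_sample expects."""
    grid = points.view(1, 1, 1, -1, 3)                        # (N, D_out, H_out, W_out, 3)
    feats = F.grid_sample(feature_volume, grid, align_corners=True)
    return feats.view(16, -1).T                               # (M, C) trilinearly interpolated features
```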

Decomposing NeRF for Editing via Feature Field Distillation

1 code implementation 31 May 2022 Sosuke Kobayashi, Eiichi Matsumoto, Vincent Sitzmann

Emerging neural radiance fields (NeRF) are a promising scene representation for computer graphics, enabling high-quality 3D reconstruction and novel view synthesis from image observations.

3D Reconstruction, Novel View Synthesis

Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering

1 code implementation NeurIPS 2021 Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, Fredo Durand

In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation.

Meta-Learning, Scene Understanding
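
A small sketch of the light-field idea: encode each camera ray by its 6D Plücker coordinates and map it to a color with a single network evaluation, instead of integrating many samples along the ray. The MLP widths are arbitrary, and LFN's meta-learned latent conditioning is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def plucker(ray_origin, ray_dir):
    """6D Plücker coordinates (d, o x d); invariant to which point on the ray is taken as origin."""
    d = F.normalize(ray_dir, dim=-1)
    m = torch.cross(ray_origin, d, dim=-1)
    return torch.cat([d, m], dim=-1)

# One network evaluation per ray: Plücker coordinates in, RGB out.
lfn = nn.Sequential(
    nn.Linear(6, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

colors = lfn(plucker(torch.zeros(1024, 3), torch.randn(1024, 3)))  # (1024, 3)
```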

MetaSDF: Meta-learning Signed Distance Functions

2 code implementations NeurIPS 2020 Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution.

Meta-Learning
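
A hedged sketch of the gradient-based (MAML-style) inner loop at the heart of this approach: specialize a shared SDF network to one shape using a few gradient steps on a small context set of (point, signed distance) pairs. The architecture, step count, and learning rate are placeholder assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # requires PyTorch >= 2.0

class SDFNet(nn.Module):
    """Small coordinate MLP predicting a signed distance for each 3D point."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

def inner_adapt(model, points, sdf_targets, steps=5, lr=1e-2):
    """Return shape-specific parameters after a few inner-loop gradient steps."""
    params = {k: v.clone() for k, v in model.named_parameters()}
    for _ in range(steps):
        pred = functional_call(model, params, (points,)).squeeze(-1)
        loss = ((pred - sdf_targets) ** 2).mean()
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: p - lr * g for (k, p), g in zip(params.items(), grads)}
    return params  # an outer loop would backprop a query-set loss through these steps
```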

Learning to Render Novel Views from Wide-Baseline Stereo Pairs

1 code implementation CVPR 2023 Yilun Du, Cameron Smith, Ayush Tewari, Vincent Sitzmann

We conduct extensive comparisons on held-out test scenes across two real-world datasets, significantly outperforming prior work on novel view synthesis from sparse image observations and achieving multi-view-consistent novel view synthesis.

Novel View Synthesis

Intrinsic Image Diffusion for Indoor Single-view Material Estimation

1 code implementation 19 Dec 2023 Peter Kocsis, Vincent Sitzmann, Matthias Nießner

We present Intrinsic Image Diffusion, a generative model for appearance decomposition of indoor scenes.

Dirty Pixels: Towards End-to-End Image Processing and Perception

1 code implementation 23 Jan 2017 Steven Diamond, Vincent Sitzmann, Frank Julca-Aguilar, Stephen Boyd, Gordon Wetzstein, Felix Heide

As such, conventional imaging involves processing the RAW sensor measurements in a sequential pipeline of steps, such as demosaicking, denoising, deblurring, tone-mapping and compression.

Autonomous Driving, Deblurring +10

DittoGym: Learning to Control Soft Shape-Shifting Robots

2 code implementations 24 Jan 2024 Suning Huang, Boyuan Chen, Huazhe Xu, Vincent Sitzmann

Inspired by nature and recent novel robot designs, we propose to go a step further and explore reconfigurable robots, defined as robots that can change their morphology within their lifetime.

Reinforcement Learning (RL)

Unrolled Optimization with Deep Priors

2 code implementations 22 May 2017 Steven Diamond, Vincent Sitzmann, Felix Heide, Gordon Wetzstein

A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model.

Deblurring, Denoising
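
A sketch of the unrolled-optimization pattern the title refers to: alternate a gradient step on the data term 0.5 * ||A x - y||^2 with a learned network acting as the prior (proximal) step, for a fixed number of unrolled iterations trained end to end. The residual CNN prior, step size, and iteration count are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class DenoiserPrior(nn.Module):
    """Small residual CNN standing in for a learned image prior."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, x):
        return x - self.net(x)

def unrolled_recon(y, forward_op, adjoint_op, prior, num_iters=8, step=0.5):
    """Unrolled proximal-gradient sketch for measurements y under a known operator A."""
    x = adjoint_op(y)                          # simple initialization: A^T y
    for _ in range(num_iters):
        grad = adjoint_op(forward_op(x) - y)   # gradient of the data term
        x = prior(x - step * grad)             # learned prior / proximal step
    return x

# e.g. with a denoising-only forward model, A = identity:
x_hat = unrolled_recon(torch.randn(1, 1, 64, 64), lambda v: v, lambda v: v, DenoiserPrior())
```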

Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation

1 code implementation 9 Dec 2021 Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B. Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, Vincent Sitzmann

Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors.

Object

How do people explore virtual environments?

no code implementations 13 Dec 2016 Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh Agrawala, Diego Gutierrez, Belen Masia, Gordon Wetzstein

Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention.

Semantic Implicit Neural Scene Representations With Semi-Supervised Training

no code implementations 28 Mar 2020 Amit Kohli, Vincent Sitzmann, Gordon Wetzstein

The recent success of implicit neural scene representations has introduced a viable new way to capture and store 3D scenes.

3D Semantic Segmentation, Representation Learning +1

State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

BIG-bench Machine Learning, Image Generation +2

Deep Medial Fields

no code implementations 7 Jun 2021 Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi

Implicit representations of geometry, such as occupancy fields or signed distance fields (SDFs), have recently regained popularity for encoding 3D solid shapes in functional form.

Learning Signal-Agnostic Manifolds of Neural Fields

no code implementations NeurIPS 2021 Yilun Du, Katherine M. Collins, Joshua B. Tenenbaum, Vincent Sitzmann

We leverage neural fields to capture the underlying structure in image, shape, audio and cross-modal audiovisual domains in a modality-independent manner.

Unsupervised Discovery and Composition of Object Light Fields

no code implementations 8 May 2022 Cameron Smith, Hong-Xing Yu, Sergey Zakharov, Fredo Durand, Joshua B. Tenenbaum, Jiajun Wu, Vincent Sitzmann

Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding.

Novel View Synthesis, Object +1

Neural Groundplans: Persistent Neural Scene Representations from a Single Image

no code implementations 22 Jul 2022 Prafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov, Rares Ambrus, Adrien Gaidon, William T. Freeman, Fredo Durand, Joshua B. Tenenbaum, Vincent Sitzmann

We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene.

Disentanglement, Instance Segmentation +4

DeLiRa: Self-Supervised Depth, Light, and Radiance Fields

no code implementations ICCV 2023 Vitor Guizilini, Igor Vasiljevic, Jiading Fang, Rares Ambrus, Sergey Zakharov, Vincent Sitzmann, Adrien Gaidon

In this work, we propose to use the multi-view photometric objective from the self-supervised depth estimation literature as a geometric regularizer for volumetric rendering, significantly improving novel view synthesis without requiring additional information.

3D Reconstruction, Depth Estimation +1
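
A simplified sketch of the multi-view photometric objective mentioned above: back-project target pixels with a predicted depth map, reproject them into a source view, sample the source image there, and penalize the photometric difference. SSIM terms, auto-masking, and multi-scale handling common in the self-supervised depth literature are omitted, and all shapes and variable names are assumptions.

```python
import torch
import torch.nn.functional as F

def photometric_loss(src_img, tgt_img, tgt_depth, K, T_src_tgt):
    """src_img/tgt_img: (1, 3, H, W); tgt_depth: (1, 1, H, W); K: (3, 3); T_src_tgt: (4, 4)."""
    _, _, H, W = tgt_img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(3, -1)   # homogeneous pixel coords
    pts = (torch.linalg.inv(K) @ pix) * tgt_depth.reshape(1, -1)      # 3D points in the target frame
    pts = torch.cat([pts, torch.ones(1, H * W)], dim=0)
    pts_src = (T_src_tgt @ pts)[:3]                                   # points in the source frame
    proj = K @ pts_src
    uv = proj[:2] / proj[2:].clamp(min=1e-6)
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,                      # normalize to [-1, 1]
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    warped = F.grid_sample(src_img, grid, align_corners=True)
    return (tgt_img - warped).abs().mean()
```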

Approaching human 3D shape perception with neurally mappable models

no code implementations 22 Aug 2023 Thomas P. O'Connell, Tyler Bonnen, Yoni Friedman, Ayush Tewari, Josh B. Tenenbaum, Vincent Sitzmann, Nancy Kanwisher

Finally, we find that while the models trained with multi-view learning objectives are able to partially generalize to new object categories, they fall short of human alignment.

Multi-View Learning

Variational Barycentric Coordinates

no code implementations 5 Oct 2023 Ana Dodik, Oded Stein, Vincent Sitzmann, Justin Solomon

We propose a variational technique to optimize for generalized barycentric coordinates that offers additional control compared to existing models.
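
For context, a minimal NumPy sketch of classical barycentric coordinates for a single triangle; the paper generalizes far beyond this closed-form case by optimizing the coordinate functions variationally, which this snippet does not attempt.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p w.r.t. triangle (a, b, c), with u + v + w = 1."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

print(barycentric(np.array([0.25, 0.25]), np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])))
# -> [0.5  0.25 0.25]
```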

