Search Results for author: Peter Hedman

Found 17 papers, 10 papers with code

Deep Blending for Free-Viewpoint Image-Based Rendering

1 code implementation SIGGRAPH Asia 2018 Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, Gabriel Brostow

We present a new deep learning approach to blending for IBR, in which we use held-out real image data to learn blending weights to combine input photo contributions.

Novel View Synthesis
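
The blending step described above lends itself to a small sketch: a minimal NumPy example of softmax-weighted blending of reprojected input photos, assuming per-pixel logits that a learned network would supply (random placeholders here; this is not the paper's network).

```python
import numpy as np

def blend_images(warped, logits):
    """Blend N warped input photos with per-pixel softmax weights.

    warped: (N, H, W, 3) candidate contributions reprojected into the
            novel view; logits: (N, H, W) scores that a learned network
            would predict (random placeholders in this sketch).
    """
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)                    # softmax over the N inputs
    return (w[..., None] * warped).sum(axis=0)           # (H, W, 3) blended image

# Toy usage with 4 candidate views of a 2x2 image.
rng = np.random.default_rng(0)
out = blend_images(rng.random((4, 2, 2, 3)), rng.standard_normal((4, 2, 2)))
print(out.shape)  # (2, 2, 3)
```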

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

4 code implementations ICCV 2021 Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.

Baking Neural Radiance Fields for Real-Time View Synthesis

1 code implementation ICCV 2021 Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints.
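
A minimal sketch of the standard volume-rendering quadrature that such neural volumetric representations use to form a pixel, with densities and colors standing in for the outputs of a trained NeRF.

```python
import numpy as np

def composite_ray(sigma, rgb, deltas):
    """Alpha-composite samples along one ray (NeRF quadrature).

    sigma: (S,) volume densities, rgb: (S, 3) emitted colors,
    deltas: (S,) distances between adjacent samples.
    """
    alpha = 1.0 - np.exp(-sigma * deltas)              # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha))[:-1])  # transmittance T_i
    weights = trans * alpha                            # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)        # final pixel color

sigma = np.array([0.1, 0.5, 2.0])
rgb = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(composite_ray(sigma, rgb, np.full(3, 0.25)))
```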

HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields

2 code implementations 24 Jun 2021 Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, Steven M. Seitz

A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space.

Novel View Synthesis
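
A toy illustration of the deformation-field idea from the excerpt: a tiny MLP (random, untrained weights) maps an observation-space point plus a per-frame latent code to an offset into the canonical template space. Sizes and names are illustrative assumptions, not HyperNeRF's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3 + 8, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.standard_normal((64, 3)) * 0.1, np.zeros(3)

def to_canonical(x, frame_code):
    """Map observation-space point x (3,) to canonical space.

    frame_code: (8,) learned per-frame latent (random here).
    The MLP predicts an offset; canonical point = x + offset.
    """
    h = np.tanh(np.concatenate([x, frame_code]) @ W1 + b1)
    return x + h @ W2 + b2

print(to_canonical(np.array([0.2, -0.1, 0.5]), rng.standard_normal(8)))
```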

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

1 code implementation CVPR 2022 Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance.
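
The paper's handling of unbounded content rests on a contraction that maps all of space into a bounded ball. The formula below follows the Mip-NeRF 360 paper, applied here to single points rather than to the Gaussians the method actually contracts, which is a simplification.

```python
import numpy as np

def contract(x):
    """Mip-NeRF 360 style contraction: identity inside the unit ball,
    smoothly squashing everything outside into radius < 2."""
    n = np.linalg.norm(x)
    if n <= 1.0:
        return x
    return (2.0 - 1.0 / n) * (x / n)

print(contract(np.array([0.3, 0.0, 0.0])))    # unchanged
print(contract(np.array([100.0, 0.0, 0.0])))  # norm approaches 2
```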

Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields

2 code implementations CVPR 2022 Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan

Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location.
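
A skeletal sketch of the volumetric function described above: an MLP (random weights here) mapping position and view direction to density and view-dependent color. Ref-NeRF's actual contribution, restructuring the view-dependent branch around reflection directions, is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.standard_normal((3, 64)) * 0.1
Wd = rng.standard_normal((64 + 3, 64)) * 0.1
Ws, Wc = rng.standard_normal((64, 1)) * 0.1, rng.standard_normal((64, 3)) * 0.1

def radiance_field(x, d):
    """(position, view direction) -> (density sigma, rgb color)."""
    h = np.tanh(x @ Wx)                        # spatial features
    sigma = np.log1p(np.exp(h @ Ws))           # softplus keeps density >= 0
    h2 = np.tanh(np.concatenate([h, d]) @ Wd)  # inject view direction
    rgb = 1.0 / (1.0 + np.exp(-(h2 @ Wc)))     # sigmoid keeps color in [0, 1]
    return sigma.item(), rgb

print(radiance_field(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```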

MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes

no code implementations 23 Feb 2023 Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman

We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

no code implementations 28 Feb 2023 Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall

We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.

Novel View Synthesis

Vox-E: Text-guided Voxel Editing of 3D Objects

1 code implementation ICCV 2023 Etai Sella, Gal Fiebelman, Peter Hedman, Hadar Averbuch-Elor

Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it.

3D Object Editing, Text to 3D
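
A hedged sketch of the grid-based lookup underlying such volumetric representations: trilinear interpolation of features stored on a voxel grid (random grid contents here, not Vox-E's learned values).

```python
import numpy as np

def trilerp(grid, p):
    """Trilinearly interpolate a (R, R, R, C) feature grid at point
    p in [0, 1]^3. This is the basic lookup behind grid-based fields."""
    r = grid.shape[0]
    f = np.clip(p * (r - 1), 0, r - 1 - 1e-6)
    i = f.astype(int)
    t = f - i                                  # fractional offsets
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out = out + w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return out

grid = np.random.default_rng(0).random((16, 16, 16, 4))
print(trilerp(grid, np.array([0.5, 0.25, 0.9])))
```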

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

1 code implementation ICCV 2023 Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density.

Novel View Synthesis
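
Zip-NeRF's grids are multiresolution hash grids in the style of Instant NGP. Below is a simplified sketch of that encoding under stated assumptions: nearest-vertex lookups instead of interpolation, and random table contents rather than trained features.

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_encode(p, tables, base_res=16, growth=2.0):
    """Concatenate features from L hashed grids of increasing resolution.

    p: point in [0, 1]^3; tables: list of (T, C) random feature tables.
    Nearest-vertex lookup for brevity; real implementations interpolate.
    """
    feats = []
    for lvl, table in enumerate(tables):
        res = int(base_res * growth ** lvl)
        idx = np.round(p * res).astype(np.uint64)   # nearest grid vertex
        h = np.bitwise_xor.reduce(idx * PRIMES) % np.uint64(len(table))
        feats.append(table[h])
    return np.concatenate(feats)

rng = np.random.default_rng(0)
tables = [rng.random((2**14, 2)) for _ in range(4)]
print(hash_encode(np.array([0.3, 0.7, 0.1]), tables).shape)  # (8,)
```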

Eclipse: Disambiguating Illumination and Materials using Unintended Shadows

no code implementations 25 May 2023 Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul P. Srinivasan

We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders that inadvertently cast shadows upon it.

Inverse Rendering
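
A toy Monte Carlo shading estimate in the spirit of this setup: outgoing radiance averaged over sampled directions of environment light times a visibility term times a Lambertian BRDF. All quantities are stand-ins; the paper makes the estimator differentiable and optimizes materials, illumination, and occluder shapes jointly.

```python
import numpy as np

def shade(normal, albedo, env_fn, vis_fn, n_samples=256, rng=None):
    """One-bounce Lambertian Monte Carlo estimate:
    L_out ~ mean over directions w of env(w) * visibility(w)
            * albedo/pi * max(n.w, 0),
    with uniform sphere sampling (pdf = 1 / 4pi)."""
    rng = rng or np.random.default_rng(0)
    w = rng.standard_normal((n_samples, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)  # uniform directions on sphere
    cos = np.clip(w @ normal, 0.0, None)           # foreshortening, clamped
    li = np.array([env_fn(d) for d in w])          # incoming light, (S, 3)
    v = np.array([vis_fn(d) for d in w])           # 0 if an occluder blocks d
    integrand = li * (v * cos)[:, None] * (albedo / np.pi)
    return integrand.mean(axis=0) * 4.0 * np.pi    # divide by uniform pdf

# Stand-ins: constant white environment, a blocker covering directions with x > 0.
out = shade(np.array([0.0, 0.0, 1.0]), np.array([0.8, 0.6, 0.4]),
            lambda d: np.ones(3), lambda d: float(d[0] <= 0.0))
print(out)
```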

SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration

no code implementations 12 Dec 2023 Daniel Duckworth, Peter Hedman, Christian Reiser, Peter Zhizhin, Jean-François Thibert, Mario Lučić, Richard Szeliski, Jonathan T. Barron

Recent techniques for real-time view synthesis have rapidly advanced in fidelity and speed, and modern methods are capable of rendering near-photorealistic scenes at interactive frame rates.

Novel View Synthesis

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis

no code implementations 19 Feb 2024 Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger

Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training.
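
The binary-entropy regularizer mentioned in the excerpt, as a minimal sketch where `opacity` stands in for the grid's per-voxel opacities.

```python
import numpy as np

def binary_entropy(opacity, eps=1e-6):
    """H(a) = -a*log(a) - (1-a)*log(1-a), averaged over voxels.
    Minimizing this pushes opacities toward 0 or 1, so a hard
    surface can be extracted from the grid after training."""
    a = np.clip(opacity, eps, 1.0 - eps)  # avoid log(0)
    return np.mean(-a * np.log(a) - (1 - a) * np.log(1 - a))

print(binary_entropy(np.array([0.5])))    # ~0.693, maximal uncertainty
print(binary_entropy(np.array([0.001])))  # near 0, already binarized
```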
