Search Results for author: Ben Mildenhall

Found 32 papers, 18 papers with code

CamP: Camera Preconditioning for Neural Radiance Fields

no code implementations21 Aug 2023 Keunhong Park, Philipp Henzler, Ben Mildenhall, Jonathan T. Barron, Ricardo Martin-Brualla

We propose using a proxy problem to compute a whitening transform that eliminates the correlation between camera parameters and normalizes their effects, and using this transform as a preconditioner for the camera parameters during joint optimization.
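
A minimal sketch of the whitening idea, assuming the proxy problem yields a Jacobian J of residuals with respect to the camera parameters (the function name and toy Jacobian below are illustrative, not the authors' code):

```python
import numpy as np

def whitening_preconditioner(J, eps=1e-8):
    """Build P = (J^T J)^(-1/2) so that reparameterized cameras params = P @ z
    have decorrelated, unit-scale effects on the proxy residuals."""
    cov = J.T @ J                       # correlation of parameter effects
    w, V = np.linalg.eigh(cov)          # symmetric eigendecomposition
    return V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T

# Toy example: 6 camera parameters with wildly different scales.
J = np.random.randn(100, 6) * np.array([1e3, 1e3, 1.0, 1.0, 1.0, 1e-2])
P = whitening_preconditioner(J)
# During joint optimization one would take gradient steps in z, with
# params = P @ z, so ill-conditioned directions are rescaled automatically.
```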

Eclipse: Disambiguating Illumination and Materials using Unintended Shadows

no code implementations25 May 2023 Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul P. Srinivasan

We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders that inadvertently cast shadows upon it.

Inverse Rendering

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

1 code implementation13 Apr 2023 Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density.

Novel View Synthesis
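
As a rough illustration of what grid-based representations buy here, the sketch below reads features for a 3D point by trilinear interpolation from a single dense voxel grid; the accelerated backbones Zip-NeRF targets (e.g. Instant NGP's multiresolution hash grids) are more elaborate, but the cheap-lookup principle is the same:

```python
import numpy as np

def trilinear_lookup(grid, x):
    """grid: (R, R, R, C) feature volume; x: point in [0, 1)^3."""
    R = grid.shape[0]
    p = x * (R - 1)
    i0 = np.floor(p).astype(int)            # lower corner of the cell
    i1 = np.minimum(i0 + 1, R - 1)          # upper corner, clamped
    t = p - i0                              # fractional position in the cell
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0])
                     * (t[1] if dy else 1 - t[1])
                     * (t[2] if dz else 1 - t[2]))
                out = out + w * grid[i1[0] if dx else i0[0],
                                     i1[1] if dy else i0[1],
                                     i1[2] if dz else i0[2]]
    return out  # (C,) feature, fed to a small MLP for color and density

grid = np.random.randn(32, 32, 32, 8)
feat = trilinear_lookup(grid, np.array([0.4, 0.7, 0.1]))
```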

DreamBooth3D: Subject-Driven Text-to-3D Generation

no code implementations23 Mar 2023 Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, Yuanzhen Li, Varun Jampani

We present DreamBooth3D, an approach to personalize text-to-3D generative models from as few as 3-6 casually captured images of a subject.

Text to 3D

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

no code implementations28 Feb 2023 Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall

We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.

Novel View Synthesis

MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes

no code implementations23 Feb 2023 Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman

We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.

DreamFusion: Text-to-3D using 2D Diffusion

4 code implementations29 Sep 2022 Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall

Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.

Denoising Image Generation +1
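
The loss in question is Score Distillation Sampling (SDS). Below is a hedged sketch of a single SDS step; `render` and `denoiser` are hypothetical stand-ins for a differentiable NeRF renderer and a frozen text-conditioned diffusion model, and the cosine schedule and weighting are assumptions:

```python
import numpy as np

def sds_gradient(render, denoiser, theta, prompt, rng):
    x = render(theta)                    # 2D rendering from a random view
    t = rng.uniform(0.02, 0.98)          # random diffusion timestep
    eps = rng.standard_normal(x.shape)   # injected Gaussian noise
    alpha = np.cos(0.5 * np.pi * t) ** 2 # assumed cosine noise schedule
    x_t = np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * eps
    eps_hat = denoiser(x_t, t, prompt)   # frozen model predicts the noise
    w = 1.0 - alpha                      # one common timestep weighting
    # SDS treats the residual as a constant and backpropagates it only
    # through x = render(theta), skipping the denoiser's Jacobian.
    return w * (eps_hat - eps)           # dL/dx, to be chained through render

# Dummy stand-ins so the sketch runs: the "scene" is just a raw 2D image,
# so dx/dtheta is the identity and the returned value is the full gradient.
rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8))
g = sds_gradient(lambda th: th, lambda x, t, p: x, theta, "a corgi", rng)
theta -= 0.1 * g
```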

Volume Rendering Digest (for NeRF)

no code implementations29 Aug 2022 Andrea Tagliasacchi, Ben Mildenhall

Neural Radiance Fields employ simple volume rendering as a way to overcome the challenges of differentiating through ray-triangle intersections by leveraging a probabilistic notion of visibility.
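
Concretely, that probabilistic visibility reduces to the standard NeRF quadrature, sketched below (textbook formulation, not code from the digest): transmittance T_i is the probability the ray reaches sample i, and each sample contributes with weight T_i * (1 - exp(-sigma_i * delta_i)).

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """sigmas: (N,) densities; colors: (N, 3); deltas: (N,) segment lengths."""
    alphas = 1.0 - np.exp(-sigmas * deltas)      # per-segment opacity
    trans = np.cumprod(1.0 - alphas)             # survival after each segment
    trans = np.concatenate([[1.0], trans[:-1]])  # T_i: survival before segment i
    weights = trans * alphas                     # contribution of each sample
    return weights @ colors                      # expected color along the ray

sigmas = np.array([0.1, 2.0, 5.0])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
deltas = np.full(3, 0.1)
print(composite(sigmas, colors, deltas))
```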

Fast and High-Quality Image Denoising via Malleable Convolutions

no code implementations2 Jan 2022 Yifan Jiang, Bartlomiej Wronski, Ben Mildenhall, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue

These spatially-varying kernels are produced by an efficient predictor network running on a downsampled input, making them much cheaper to compute than per-pixel kernels predicted from the full-resolution image, while also enlarging the network's receptive field compared with static kernels.

Image Denoising Image Restoration +1
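
A toy version of the idea, with an illustrative per-tile predictor standing in for the paper's network (the tiling scheme, kernel size, and names below are assumptions, not the MalleConv architecture):

```python
import numpy as np

def malleable_conv(img, predict_kernels, tile=8, K=3):
    """Apply kernels predicted from a downsampled input to the full image."""
    H, W = img.shape
    small = img.reshape(H // tile, tile, W // tile, tile).mean(axis=(1, 3))
    kernels = predict_kernels(small)         # (H//tile, W//tile, K, K)
    pad = K // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            k = kernels[y // tile, x // tile]        # this pixel's tile kernel
            out[y, x] = (padded[y:y + K, x:x + K] * k).sum()
    return out

# Dummy predictor: a 3x3 box filter for every tile.
predict = lambda s: np.ones(s.shape + (3, 3)) / 9.0
denoised = malleable_conv(np.random.rand(32, 32), predict)
```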

Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields

1 code implementation CVPR 2022 Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan

Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location.

Zero-Shot Text-Guided Object Generation with Dream Fields

4 code implementations CVPR 2022 Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole

Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision.

Neural Rendering

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

no code implementations CVPR 2022 Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan

We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training.

Novel View Synthesis

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

1 code implementation CVPR 2022 Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance.
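
One of the paper's key ingredients is a contraction that maps all of unbounded space into a ball of radius 2, so distant content gets representable coordinates. A minimal point-wise version of the formula (the paper actually applies it to Gaussians rather than points):

```python
import numpy as np

def contract(x):
    """Identity inside the unit ball; maps all of R^3 into a radius-2 ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else (2.0 - 1.0 / n) * (x / n)

print(contract(np.array([0.5, 0.0, 0.0])))    # unchanged: inside the unit ball
print(contract(np.array([100.0, 0.0, 0.0])))  # squashed to just under radius 2
```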

Baking Neural Radiance Fields for Real-Time View Synthesis

1 code implementation ICCV 2021 Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints.

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

4 code implementations ICCV 2021 Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

no code implementations CVPR 2021 Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.

Learned Initializations for Optimizing Coordinate-Based Neural Representations

3 code implementations CVPR 2021 Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low-dimensional signals.

Meta-Learning

Neural Reflectance Fields for Appearance Acquisition

no code implementations9 Aug 2020 Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi

We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

11 code implementations NeurIPS 2020 Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
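
The mapping itself is short. A minimal sketch following the paper's formulation, gamma(v) = [cos(2*pi*B*v), sin(2*pi*B*v)], where the entries of B are Gaussian and their scale is a tuned hyperparameter:

```python
import numpy as np

def fourier_features(v, B):
    """v: (..., d) coordinates; B: (m, d) random projection matrix."""
    proj = 2.0 * np.pi * v @ B.T                 # (..., m) random projections
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = 10.0 * rng.standard_normal((256, 2))   # scale 10: a typical tuned value
coords = rng.random((4, 2))                # e.g. normalized pixel coordinates
print(fourier_features(coords, B).shape)   # (4, 512): the MLP's actual input
```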

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

37 code implementations ECCV 2020 Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.

Generalizable Novel View Synthesis Neural Rendering +1
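
Before reaching the network, each input coordinate is lifted by the paper's positional encoding, gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)); a minimal sketch:

```python
import numpy as np

def positional_encoding(p, L):
    """p: (..., d) coordinates; returns (..., 2 * L * d) encoded features."""
    freqs = (2.0 ** np.arange(L)) * np.pi          # 2^k * pi for k = 0..L-1
    angles = p[..., None] * freqs                  # (..., d, L)
    enc = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)

xyz = np.array([[0.1, -0.2, 0.3]])
print(positional_encoding(xyz, L=10).shape)  # (1, 60), as used for position
```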

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation CVPR 2020 Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Lighting Estimation

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

1 code implementation2 May 2019 Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, Abhishek Kar

We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration.

Novel View Synthesis

StegaStamp: Invisible Hyperlinks in Physical Photographs

3 code implementations CVPR 2020 Matthew Tancik, Ben Mildenhall, Ren Ng

Printed and digitally displayed photos can hide imperceptible digital data that is accessible through internet-connected imaging systems.

Steganographics

DiffuserCam: Lensless Single-exposure 3D Imaging

no code implementations5 Oct 2017 Nick Antipa, Grace Kuo, Reinhard Heckel, Ben Mildenhall, Emrah Bostan, Ren Ng, Laura Waller

We demonstrate a compact and easy-to-build computational camera for single-shot 3D imaging.
