Search Results for author: Pratul P. Srinivasan

Found 34 papers, 16 papers with code

Baking Neural Radiance Fields for Real-Time View Synthesis

1 code implementation ICCV 2021 Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints.

Pushing the Boundaries of View Extrapolation with Multiplane Images

1 code implementation CVPR 2019 Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases linearly with the MPI disparity sampling frequency, as well as a novel MPI prediction procedure that theoretically enables view extrapolations of up to $4\times$ the lateral viewpoint movement allowed by prior work.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

36 code implementations ECCV 2020 Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
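
Illustratively, the interface this describes is tiny; below is a toy NumPy sketch with random, untrained weights (the published model is an 8-layer, 256-wide MLP with positional encoding, and feeds the viewing direction in only near the output):

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTH = 64  # toy width; the paper uses 256

# Hypothetical, untrained parameters for illustration only.
params = {
    "trunk": [(0.1 * rng.normal(size=(WIDTH, 5)), np.zeros(WIDTH))]
             + [(0.1 * rng.normal(size=(WIDTH, WIDTH)), np.zeros(WIDTH)) for _ in range(2)],
    "w_sigma": 0.1 * rng.normal(size=(1, WIDTH)),
    "w_rgb": 0.1 * rng.normal(size=(3, WIDTH)),
}

def nerf_mlp(xyz, view_dir):
    """Map a 5D coordinate (3D location + 2D viewing direction) to
    (volume density, view-dependent emitted RGB radiance)."""
    h = np.concatenate([xyz, view_dir])                  # the 5D input
    for W, b in params["trunk"]:
        h = np.maximum(W @ h + b, 0.0)                   # ReLU hidden layers
    sigma = np.maximum(params["w_sigma"] @ h, 0.0)       # density is non-negative
    rgb = 1.0 / (1.0 + np.exp(-(params["w_rgb"] @ h)))   # color in [0, 1]
    return sigma, rgb

sigma, rgb = nerf_mlp(np.array([0.1, 0.2, 0.3]), np.array([0.0, 1.57]))
```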

Generalizable Novel View Synthesis · Low-Dose X-Ray CT Reconstruction +2

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

1 code implementation 2 May 2019 Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, Abhishek Kar

We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration.

Novel View Synthesis

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

4 code implementations ICCV 2021 Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

1 code implementation CVPR 2022 Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance.
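
Mip-NeRF 360 handles this by contracting all of space into a bounded domain before querying the network; a minimal sketch of that contraction (the paper applies it to Gaussians rather than raw points):

```python
import numpy as np

def contract(x):
    """Mip-NeRF 360 scene contraction: identity inside the unit ball,
    (2 - 1/||x||) * x/||x|| outside, so every point in R^3 lands in a
    ball of radius 2 and unbounded scenes become representable."""
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return x
    return (2.0 - 1.0 / norm) * (x / norm)

contract(np.array([10.0, 0.0, 0.0]))  # -> [1.9, 0., 0.]: distant content stays bounded
```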

Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields

2 code implementations CVPR 2022 Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan

Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location.
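
The "structured" part of Ref-NeRF's appearance model conditions view-dependent color on the reflection of the outgoing view direction about the surface normal, rather than on the raw view direction; a minimal sketch of that reparameterization:

```python
import numpy as np

def reflect(view_dir, normal):
    """omega_r = 2 (omega . n) n - omega, for a unit outgoing view
    direction (pointing toward the camera) and unit normal. Conditioning
    radiance on omega_r makes specular appearance easier for an MLP to fit."""
    return 2.0 * np.dot(view_dir, normal) * normal - view_dir

reflect(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))  # -> [0., 0., 1.]
```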

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

13 code implementations NeurIPS 2020 Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
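
The mapping itself is one line of math; a minimal sketch, with B a random Gaussian matrix as in the paper's random Fourier feature variant (the scale of B is a key hyperparameter; 10 here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = 10.0 * rng.normal(size=(256, 2))  # random frequencies for 2D inputs

def fourier_features(v):
    """gamma(v) = [cos(2 pi B v), sin(2 pi B v)]: lifts a low-dimensional
    point (here a 2D image coordinate) into a high-dimensional embedding
    from which an MLP can learn high-frequency functions."""
    proj = 2.0 * np.pi * (B @ v)
    return np.concatenate([np.cos(proj), np.sin(proj)])

fourier_features(np.array([0.25, 0.75])).shape  # (512,)
```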

HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video

1 code implementation CVPR 2022 Chung-Yi Weng, Brian Curless, Pratul P. Srinivasan, Jonathan T. Barron, Ira Kemelmacher-Shlizerman

Our method optimizes for a volumetric representation of the person in a canonical T-pose, in concert with a motion field that maps the estimated canonical representation to every frame of the video via backward warps.
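
In pseudocode, a query against the posed human reduces to a backward warp followed by a canonical-volume lookup; a hypothetical sketch (the callables below are stand-ins, not the paper's API):

```python
def query_posed_point(x_obs, motion_field, canonical_nerf):
    """Warp a point from a frame's observation space back to the
    canonical T-pose, then evaluate the single shared volumetric
    representation there. `motion_field` is per-frame; `canonical_nerf`
    is shared across the whole video."""
    x_canonical = motion_field(x_obs)          # backward warp for this frame
    sigma, rgb = canonical_nerf(x_canonical)   # density + color in canonical space
    return sigma, rgb
```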

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

1 code implementation ICCV 2023 Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density.
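
For intuition, a grid-based representation replaces most of the MLP with an interpolated feature lookup; here is an illustrative trilinear lookup into a toy dense grid (Zip-NeRF itself multisamples a multiresolution hash grid to stay anti-aliased):

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = rng.normal(size=(32, 32, 32, 8))  # toy dense feature grid, untrained

def grid_lookup(x):
    """Trilinearly interpolate an 8D feature for a point x in [0, 1)^3;
    a small MLP would then decode the feature into color and density."""
    p = x * (np.array(GRID.shape[:3]) - 1)
    i0 = np.floor(p).astype(int)
    t = p - i0
    out = np.zeros(GRID.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0])
                     * (t[1] if dy else 1 - t[1])
                     * (t[2] if dz else 1 - t[2]))
                out += w * GRID[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

grid_lookup(np.array([0.5, 0.25, 0.75])).shape  # (8,)
```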

Novel View Synthesis

Learned Initializations for Optimizing Coordinate-Based Neural Representations

3 code implementations CVPR 2021 Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low-dimensional signals.

Meta-Learning

Learning to Synthesize a 4D RGBD Light Field from a Single Image

1 code implementation ICCV 2017 Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng

We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction).

Depth Estimation

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation CVPR 2020 Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Lighting Estimation

Aperture Supervision for Monocular Depth Estimation

no code implementations CVPR 2018 Pratul P. Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, Jonathan T. Barron

We present a novel method to train machine learning algorithms to estimate scene depths from a single image, by using the information provided by a camera's aperture as supervision.

Monocular Depth Estimation

Light Field Blind Motion Deblurring

no code implementations CVPR 2017 Pratul P. Srinivasan, Ren Ng, Ravi Ramamoorthi

We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions.

Deblurring

Oriented Light-Field Windows for Scene Flow

no code implementations ICCV 2015 Pratul P. Srinivasan, Michael W. Tao, Ren Ng, Ravi Ramamoorthi

2D spatial image windows are used for comparing pixel values in computer vision applications such as correspondence for optical flow and 3D reconstruction, bilateral filtering, and image segmentation.

3D Reconstruction · Image Segmentation +3

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

no code implementations CVPR 2021 Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.

Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image

no code implementations ICCV 2021 Shumian Xin, Neal Wadhwa, Tianfan Xue, Jonathan T. Barron, Pratul P. Srinivasan, Jiawen Chen, Ioannis Gkioulekas, Rahul Garg

We use data captured with a consumer smartphone camera to demonstrate that, after a one-time calibration step, our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.

Deblurring

Urban Radiance Fields

no code implementations CVPR 2022 Konstantinos Rematas, Andrew Liu, Pratul P. Srinivasan, Jonathan T. Barron, Andrea Tagliasacchi, Thomas Funkhouser, Vittorio Ferrari

The goal of this work is to perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments (e.g., Street View).

3D Reconstruction · Novel View Synthesis

Gravitationally Lensed Black Hole Emission Tomography

no code implementations CVPR 2022 Aviad Levis, Pratul P. Srinivasan, Andrew A. Chael, Ren Ng, Katherine L. Bouman

In this work, we propose BH-NeRF, a novel tomography approach that leverages gravitational lensing to recover the continuous 3D emission field near a black hole.

3D Reconstruction

PersonNeRF: Personalized Reconstruction from Photo Collections

no code implementations CVPR 2023 Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman

We present PersonNeRF, a method that takes a collection of photos of a subject (e.g., Roger Federer) captured across multiple years with arbitrary body poses and appearances, and enables rendering the subject with arbitrary novel combinations of viewpoint, body pose, and appearance.

MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes

no code implementations 23 Feb 2023 Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman

We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

no code implementations 28 Feb 2023 Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall

We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.

Novel View Synthesis

Eclipse: Disambiguating Illumination and Materials using Unintended Shadows

no code implementations 25 May 2023 Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul P. Srinivasan

We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders that inadvertently cast shadows upon it.

Inverse Rendering

Single View Refractive Index Tomography with Neural Fields

no code implementations 8 Sep 2023 Brandon Zhao, Aviad Levis, Liam Connor, Pratul P. Srinivasan, Katherine L. Bouman

The effects of such refractive index fields appear in many scientific computer vision settings, ranging from refraction due to transparent cells in microscopy to the lensing of distant galaxies caused by dark matter in astrophysics.

3D Reconstruction

Orbital Polarimetric Tomography of a Flare Near the Sagittarius A* Supermassive Black Hole

no code implementations 11 Oct 2023 Aviad Levis, Andrew A. Chael, Katherine L. Bouman, Maciek Wielgus, Pratul P. Srinivasan

One proposed mechanism that produces flares is the formation of compact, bright regions that appear within the accretion disk and close to the event horizon.

3D Reconstruction

Nuvo: Neural UV Mapping for Unruly 3D Representations

no code implementations 11 Dec 2023 Pratul P. Srinivasan, Stephan J. Garbin, Dor Verbin, Jonathan T. Barron, Ben Mildenhall

We present a UV mapping method designed to operate on geometry produced by 3D reconstruction and generation techniques.

3D Reconstruction

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis

no code implementations 19 Feb 2024 Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger

We minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training.
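
A minimal sketch of such a binary-entropy regularizer on per-sample opacities (the paper's exact weighting and schedule may differ):

```python
import numpy as np

def binary_entropy(opacity, eps=1e-6):
    """H(o) = -(o log o + (1 - o) log(1 - o)): maximal at o = 0.5 and
    near zero at the extremes, so adding it as a loss term pushes
    opacities toward 0 or 1."""
    o = np.clip(opacity, eps, 1.0 - eps)
    return -(o * np.log(o) + (1.0 - o) * np.log1p(-o))

binary_entropy(np.array([0.5, 0.01, 0.99]))  # high at 0.5, small near 0 and 1
```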
