no code implementations • 19 Feb 2024 • Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training.
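The binarization term described here can be sketched as a mean binary-entropy penalty on per-sample opacity values: entropy is highest at 0.5 and vanishes at 0 and 1, so minimizing it drives opacities toward hard surface/empty decisions. The function name and epsilon clamp below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def binary_entropy_loss(opacity, eps=1e-7):
    """Mean binary entropy of opacity values in [0, 1].

    Minimizing this term pushes each opacity toward 0 or 1,
    which simplifies extracting a hard surface at the end of training.
    """
    a = np.clip(opacity, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-a * np.log(a) - (1.0 - a) * np.log(1.0 - a)))
```

Fully binarized opacities give a near-zero loss, while an opacity of 0.5 gives the maximum value, log 2.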
no code implementations • 1 Jan 2024 • Mia Gaia Polansky, Charles Herrmann, Junhwa Hur, Deqing Sun, Dor Verbin, Todd Zickler
We present a differentiable model that infers explicit boundaries, including curves, corners and junctions, using a mechanism that we call boundary attention.
no code implementations • 11 Dec 2023 • Pratul P. Srinivasan, Stephan J. Garbin, Dor Verbin, Jonathan T. Barron, Ben Mildenhall
We present a UV mapping method designed to operate on geometry produced by 3D reconstruction and generation techniques.
no code implementations • 5 Dec 2023 • Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at rendering photorealistic novel views of complex scenes.
no code implementations • 4 Dec 2023 • Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steve Seitz, Ira Kemelmacher, Ben Mildenhall, Pratul Srinivasan, Dor Verbin, Aleksander Holynski
We present a method that uses a text-to-image model to generate consistent content across multiple image scales, enabling extreme semantic zooms into a scene, e.g., ranging from a wide-angle landscape view of a forest to a macro shot of an insect sitting on one of the tree branches.
no code implementations • 25 May 2023 • Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul P. Srinivasan
We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders that inadvertently cast shadows upon it.
1 code implementation • ICCV 2023 • Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density.
no code implementations • ICCV 2023 • Alexander Mai, Dor Verbin, Falko Kuester, Sara Fridovich-Keil
We present Neural Microfacet Fields, a method for recovering materials, geometry, and environment illumination from images of a scene.
no code implementations • 28 Feb 2023 • Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.
no code implementations • 23 Feb 2023 • Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman
We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.
2 code implementations • CVPR 2022 • Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location.
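The volumetric rendering described here composites per-sample density and radiance along each camera ray using NeRF's standard quadrature: each sample's weight is its alpha times the transmittance accumulated before it. A minimal NumPy sketch (variable names are illustrative; a real MLP would supply `sigma` and `color`):

```python
import numpy as np

def render_ray(sigma, color, delta):
    """Composite samples along one ray with NeRF's quadrature rule.

    sigma: (N,) volume densities, color: (N, 3) emitted radiance,
    delta: (N,) distances between adjacent samples.
    """
    alpha = 1.0 - np.exp(-sigma * delta)  # per-sample opacity
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return weights @ color  # (3,) pixel color
```

A ray through empty space (all densities zero) renders black, while a single effectively opaque sample returns its own color.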
1 code implementation • CVPR 2022 • Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance.
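One way to handle content at unbounded distances, used by mip-NeRF 360 among others, is a scene contraction that leaves the unit ball unchanged and smoothly maps all of space into a ball of radius 2. A sketch of that contraction (single-point version, for clarity):

```python
import numpy as np

def contract(x):
    """Map an unbounded 3D point into a ball of radius 2.

    Points with norm <= 1 are unchanged; distant points approach
    (but never reach) the radius-2 boundary.
    """
    x = np.asarray(x, dtype=float)
    n = np.linalg.norm(x)
    if n <= 1.0:
        return x
    return (2.0 - 1.0 / n) * (x / n)
```

For example, a point at distance 4 from the origin lands at distance 1.75, and arbitrarily far points stay strictly inside radius 2, so a bounded representation can cover the whole scene.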
1 code implementation • ICCV 2021 • Dor Verbin, Todd Zickler
We introduce a bottom-up model for simultaneously finding many boundary elements in an image, including contours, corners and junctions.
1 code implementation • CVPR 2020 • Dor Verbin, Todd Zickler
An equilibrium of this game yields two things: an estimate of the 2.5D surface from the shape process, and a stochastic texture synthesis model from the texture process.
no code implementations • 19 Mar 2020 • Dor Verbin, Steven J. Gortler, Todd Zickler
We present a sufficient condition for recovering unique texture and viewpoints from unknown orthographic projections of a flat texture process.
no code implementations • 11 Oct 2016 • Adi Perry, Dor Verbin, Nahum Kiryati
The indication can be delivered via sound, display, vibration, or other communication modalities provided by the Android device.