Search Results for author: Dor Verbin

Found 22 papers, 6 papers with code

Generative Multiview Relighting for 3D Reconstruction under Extreme Illumination Variation

no code implementations • 19 Dec 2024 • Hadi AlZayer, Philipp Henzler, Jonathan T. Barron, Jia-Bin Huang, Pratul P. Srinivasan, Dor Verbin

We present an approach that reconstructs objects from images taken under different illuminations by first relighting the images under a single reference illumination with a multiview relighting diffusion model and then reconstructing the object's geometry and appearance with a radiance field architecture that is robust to the small remaining inconsistencies among the relit images.

3D Reconstruction

SimVS: Simulating World Inconsistencies for Robust View Synthesis

no code implementations • 10 Dec 2024 • Alex Trevithick, Roni Paiss, Philipp Henzler, Dor Verbin, Rundi Wu, Hadi AlZayer, Ruiqi Gao, Ben Poole, Jonathan T. Barron, Aleksander Holynski, Ravi Ramamoorthi, Pratul P. Srinivasan

Novel-view synthesis techniques achieve impressive results for static scenes but struggle when faced with the inconsistencies inherent to casual capture settings: varying illumination, scene motion, and other unintended effects that are difficult to model explicitly.

Novel View Synthesis

EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis

no code implementations • 2 Oct 2024 • Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jonathan T. Barron, Yinda Zhang

We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.

Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering

no code implementations • 9 Sep 2024 • Benjamin Attal, Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Matthew O'Toole, Pratul P. Srinivasan

State-of-the-art techniques for 3D reconstruction are largely based on volumetric scene representations, which require sampling multiple points to compute the color arriving along a ray.

3D Reconstruction • Inverse Rendering
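
The Flash Cache snippet above refers to the standard volume-rendering estimator that makes volumetric 3D reconstruction expensive: the color of a ray is an alpha-composite over many sampled points. Below is a minimal sketch of that generic quadrature, not the paper's radiance-cache method; all names are illustrative.

```python
# Generic NeRF-style volume rendering along one ray: many (density, color)
# samples are composited into a single pixel color. Illustrative only.
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample (density, color) pairs into one ray color.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB emitted at each sample
    deltas:    (N,) distances between adjacent samples along the ray
    """
    alphas = 1.0 - np.exp(-densities * deltas)       # opacity of each segment
    trans = np.cumprod(1.0 - alphas + 1e-10)         # transmittance after each segment
    trans = np.concatenate([[1.0], trans[:-1]])      # light reaching segment i
    weights = trans * alphas                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)   # (3,) final ray color

# Example: 64 samples along one ray
rng = np.random.default_rng(0)
sigma = rng.uniform(0.0, 2.0, 64)
rgb = rng.uniform(0.0, 1.0, (64, 3))
delta = np.full(64, 0.05)
print(composite_ray(sigma, rgb, delta))
```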

IllumiNeRF: 3D Relighting Without Inverse Rendering

no code implementations • 10 Jun 2024 • Xiaoming Zhao, Pratul P. Srinivasan, Dor Verbin, Keunhong Park, Ricardo Martin Brualla, Philipp Henzler

Existing methods for relightable view synthesis -- using a set of images of an object under unknown lighting to recover a 3D representation that can be rendered from novel viewpoints under a target illumination -- are based on inverse rendering, and attempt to disentangle the object geometry, materials, and lighting that explain the input images.

Inverse Rendering • Object

NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections

no code implementations • 23 May 2024 • Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, Jonathan T. Barron

Neural Radiance Fields (NeRFs) typically struggle to reconstruct and render highly specular objects, whose appearance varies quickly with changes in viewpoint.

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis

no code implementations • 19 Feb 2024 • Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger

Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training.
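
The snippet above mentions minimizing the binary entropy of the opacity values so that they binarize toward the end of training. The sketch below shows what such a regularizer can look like; the paper's exact loss weighting and schedule may differ, and the names here are illustrative.

```python
# Binary-entropy regularizer: penalizing H(alpha) pushes opacities toward
# 0 or 1, which makes a hard surface easier to extract as a mesh.
import numpy as np

def binary_entropy(alpha, eps=1e-6):
    """Per-element binary entropy H(a) = -a*log(a) - (1-a)*log(1-a)."""
    a = np.clip(alpha, eps, 1.0 - eps)
    return -(a * np.log(a) + (1.0 - a) * np.log(1.0 - a))

# Opacities near 0 or 1 incur almost no penalty; alpha = 0.5 is penalized most.
alphas = np.array([0.01, 0.5, 0.99])
print(binary_entropy(alphas))                   # ~[0.056, 0.693, 0.056]
entropy_loss = binary_entropy(alphas).mean()    # added to the training loss with some weight
```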

Boundary Attention: Learning curves, corners, junctions and grouping

no code implementations • 1 Jan 2024 • Mia Gaia Polansky, Charles Herrmann, Junhwa Hur, Deqing Sun, Dor Verbin, Todd Zickler

We present a lightweight network that infers grouping and boundaries, including curves, corners and junctions.

Nuvo: Neural UV Mapping for Unruly 3D Representations

no code implementations • 11 Dec 2023 • Pratul P. Srinivasan, Stephan J. Garbin, Dor Verbin, Jonathan T. Barron, Ben Mildenhall

We present a UV mapping method designed to operate on geometry produced by 3D reconstruction and generation techniques.

3D Reconstruction

Generative Powers of Ten

no code implementations • CVPR 2024 • Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steve Seitz, Ira Kemelmacher, Ben Mildenhall, Pratul Srinivasan, Dor Verbin, Aleksander Holynski

We present a method that uses a text-to-image model to generate consistent content across multiple image scales, enabling extreme semantic zooms into a scene, e.g., ranging from a wide-angle landscape view of a forest to a macro shot of an insect sitting on one of the tree branches.

Image Super-Resolution

Eclipse: Disambiguating Illumination and Materials using Unintended Shadows

no code implementations • CVPR 2024 • Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul P. Srinivasan

We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders that inadvertently cast shadows upon it.

Inverse Rendering

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

1 code implementation • ICCV 2023 • Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density.

Novel View Synthesis
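
The Zip-NeRF snippet refers to grid-based representations that replace part of NeRF's large MLP with interpolated feature grids. Below is a minimal sketch of the generic trilinear grid lookup behind that idea; Zip-NeRF's actual multiscale hash grid and anti-aliasing scheme are considerably more involved, and all names here are illustrative.

```python
# Trilinear lookup of learned features from a dense 3D grid.
import numpy as np

def trilinear_lookup(grid, point):
    """Interpolate a feature vector for `point` in [0, 1)^3 from a dense grid.

    grid:  (R, R, R, C) array of learned features
    point: (3,) query position in normalized coordinates
    """
    R = grid.shape[0]
    p = point * (R - 1)
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, R - 1)
    t = p - i0                                    # fractional offsets in [0, 1)
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                out = out + w * grid[idx]
    return out                                    # (C,) interpolated features

grid = np.random.default_rng(0).normal(size=(16, 16, 16, 8))
print(trilinear_lookup(grid, np.array([0.3, 0.7, 0.5])))
```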

Neural Microfacet Fields for Inverse Rendering

no code implementations • ICCV 2023 • Alexander Mai, Dor Verbin, Falko Kuester, Sara Fridovich-Keil

We present Neural Microfacet Fields, a method for recovering materials, geometry, and environment illumination from images of a scene.

Inverse Rendering • Novel View Synthesis

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

1 code implementation • 28 Feb 2023 • Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall

We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.

Novel View Synthesis

MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes

no code implementations • 23 Feb 2023 • Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman

We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.

Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields

2 code implementations • CVPR 2022 • Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan

Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location.
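
The Ref-NeRF snippet describes the underlying NeRF parameterization: an MLP that maps a 3D position and viewing direction to volume density and view-dependent radiance. The toy sketch below shows only that interface (no positional encoding, arbitrary layer sizes); Ref-NeRF itself reparameterizes the view-dependent part around the reflection direction rather than feeding the raw view direction.

```python
# Toy "scene function": position + direction -> (density, RGB). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (6, 64))      # (position + direction) -> hidden
W2 = rng.normal(0, 0.1, (64, 4))      # hidden -> (density, r, g, b)

def scene_fn(position, direction):
    """Query the toy scene MLP at one point along one ray."""
    x = np.concatenate([position, direction])     # (6,)
    h = np.maximum(W1.T @ x, 0.0)                 # ReLU hidden layer
    out = W2.T @ h
    density = np.log1p(np.exp(out[0]))            # softplus keeps density >= 0
    color = 1.0 / (1.0 + np.exp(-out[1:]))        # sigmoid keeps RGB in [0, 1]
    return density, color

print(scene_fn(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0])))
```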

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields

1 code implementation • CVPR 2022 • Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman

Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance.

Image Reconstruction • Novel View Synthesis
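
One ingredient Mip-NeRF 360 uses for such unbounded scenes is a scene contraction that maps all of space into a ball of radius 2, giving distant content a bounded parameterization. The sketch below reproduces that contraction formula from the paper; the full method applies it to Gaussians rather than individual points, and the code is illustrative only.

```python
# Mip-NeRF 360 style scene contraction for unbounded scenes:
# contract(x) = x if ||x|| <= 1, else (2 - 1/||x||) * x / ||x||.
import numpy as np

def contract(x):
    """Map an unbounded 3D point into a ball of radius 2."""
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return x                                  # near-field points are unchanged
    return (2.0 - 1.0 / norm) * (x / norm)        # far points approach radius 2

print(contract(np.array([0.3, 0.4, 0.0])))        # inside the unit ball: unchanged
print(contract(np.array([30.0, 40.0, 0.0])))      # far away: norm approaches 2
```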

Field of Junctions: Extracting Boundary Structure at Low SNR

1 code implementation • ICCV 2021 • Dor Verbin, Todd Zickler

We introduce a bottom-up model for simultaneously finding many boundary elements in an image, including contours, corners and junctions.

Image Smoothing • Junction Detection

Toward a Universal Model for Shape From Texture

1 code implementation • CVPR 2020 • Dor Verbin, Todd Zickler

An equilibrium of this game yields two things: an estimate of the 2.5D surface from the shape process, and a stochastic texture synthesis model from the texture process.

Shape from Texture • Texture Synthesis

Unique Geometry and Texture from Corresponding Image Patches

no code implementations • 19 Mar 2020 • Dor Verbin, Steven J. Gortler, Todd Zickler

We present a sufficient condition for recovering unique texture and viewpoints from unknown orthographic projections of a flat texture process.

Shape from Texture

Crossing the Road Without Traffic Lights: An Android-based Safety Device

no code implementations • 11 Oct 2016 • Adi Perry, Dor Verbin, Nahum Kiryati

The indication can be by sound, display, vibration, and various communication modalities provided by the Android device.

Optical Flow Estimation
