Search Results for author: Jonathan T. Barron

Found 51 papers, 27 papers with code

A Generalization of Otsu’s Method and Minimum Error Thresholding

1 code implementation ECCV 2020 Jonathan T. Barron

We present Generalized Histogram Thresholding (GHT), a simple, fast, and effective technique for histogram-based image thresholding.

Binarization
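
As the title indicates, GHT recovers Otsu's method and Minimum Error Thresholding as special cases. As a grounded point of reference, here is a minimal NumPy sketch of the classic Otsu special case only, not the paper's GHT algorithm; the 256-bin convention and the "pixels <= t" reading are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(hist):
    """Classic Otsu threshold from a 1-D intensity histogram:
    pick the threshold that maximizes the between-class variance."""
    p = hist.astype(np.float64) / hist.sum()
    bins = np.arange(len(hist))
    w0 = np.cumsum(p)                  # probability mass of class 0 at threshold t
    w1 = 1.0 - w0
    mu = np.cumsum(p * bins)           # cumulative (unnormalized) mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    return int(np.argmax(np.nan_to_num(sigma_b)))

# Example usage on an 8-bit grayscale image `img`:
# t = otsu_threshold(np.bincount(img.ravel(), minlength=256))
# mask = img > t
```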

Scalable Font Reconstruction with Dual Latent Manifolds

no code implementations 10 Sep 2021 Nikita Srivatsan, Si Wu, Jonathan T. Barron, Taylor Berg-Kirkpatrick


We propose a deep generative model that performs typography analysis and font reconstruction by learning disentangled manifolds of both font style and character shape.

Style Transfer

HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields

1 code implementation 24 Jun 2021 Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, Steven M. Seitz

A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space.

Novel View Synthesis
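
A minimal sketch of the deformation-field idea described above, assuming placeholder callables deform_mlp and template_nerf for the learned networks; the names and signatures are mine, not the paper's API, and the higher-dimensional "ambient" coordinates that give HyperNeRF its name are omitted:

```python
import numpy as np

def query_canonical(x, view_dir, frame_latent, deform_mlp, template_nerf):
    """Warp an observation-space point into the canonical template, then query it."""
    # Per-frame deformation, conditioned on a latent code for that input image.
    offset = deform_mlp(np.concatenate([x, frame_latent]))
    x_canonical = x + offset
    # The shared template radiance field returns density and view-dependent color.
    density, rgb = template_nerf(x_canonical, view_dir)
    return density, rgb
```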

NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination

1 code implementation 3 Jun 2021 Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron

The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and the environment lighting.

Baking Neural Radiance Fields for Real-Time View Synthesis

no code implementations 26 Mar 2021 Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints.

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields

1 code implementation 24 Mar 2021 Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.

IBRNet: Learning Multi-View Image-Based Rendering

no code implementations CVPR 2021 Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, Thomas Funkhouser

Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes.

Neural Rendering, Novel View Synthesis

INeRF: Inverting Neural Radiance Fields for Pose Estimation

1 code implementation 10 Dec 2020 Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Alberto Rodriguez, Phillip Isola, Tsung-Yi Lin

We then show that for complex real-world scenes from the LLFF dataset, iNeRF can improve NeRF by estimating the camera poses of novel images and using these images as additional training data for NeRF.

Pose Estimation

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

no code implementations CVPR 2021 Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.

NeRD: Neural Reflectance Decomposition from Image Collections

no code implementations 7 Dec 2020 Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, Hendrik P. A. Lensch

This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination.

Learned Initializations for Optimizing Coordinate-Based Neural Representations

2 code implementations CVPR 2021 Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low dimensional signals.

Meta-Learning

Nerfies: Deformable Neural Radiance Fields

1 code implementation 25 Nov 2020 Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, Ricardo Martin-Brualla

We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones.

3D Human Reconstruction

How to Train Neural Networks for Flare Removal

no code implementations 25 Nov 2020 Yicheng Wu, Qiurui He, Tianfan Xue, Rahul Garg, Jiawen Chen, Ashok Veeraraghavan, Jonathan T. Barron

When a camera is pointed at a strong light source, the resulting photograph may contain lens flare artifacts.

Cross-Camera Convolutional Color Constancy

1 code implementation 24 Nov 2020 Mahmoud Afifi, Jonathan T. Barron, Chloe LeGendre, Yun-Ta Tsai, Francois Bleibel

We present "Cross-Camera Convolutional Color Constancy" (C5), a learning-based method, trained on images from multiple cameras, that accurately estimates a scene's illuminant color from raw images captured by a new camera previously unseen during training.

Color Constancy

Light Stage Super-Resolution: Continuous High-Frequency Relighting

no code implementations 17 Oct 2020 Tiancheng Sun, Zexiang Xu, Xiuming Zhang, Sean Fanello, Christoph Rhemann, Paul Debevec, Yun-Ta Tsai, Jonathan T. Barron, Ravi Ramamoorthi

The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces.

Super-Resolution

A Convenient Generalization of Schlick's Bias and Gain Functions

no code implementations 17 Oct 2020 Jonathan T. Barron

We present a generalization of Schlick's bias and gain functions -- simple parametric curve-shaped functions for inputs in [0, 1].
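
For reference, Schlick's original bias and gain curves, which the paper's single generalized function subsumes, look like the following in NumPy. This sketch shows only the classical special cases, not the paper's generalization:

```python
import numpy as np

def schlick_bias(x, a):
    """Schlick's bias curve on [0, 1]; passes through (0.5, a)."""
    return x / ((1.0 / a - 2.0) * (1.0 - x) + 1.0)

def schlick_gain(x, a):
    """Schlick's gain curve on [0, 1], built from two bias curves
    joined at x = 0.5."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < 0.5,
                    schlick_bias(2.0 * x, a) / 2.0,
                    1.0 - schlick_bias(2.0 - 2.0 * x, a) / 2.0)
```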

Shape, Illumination, and Reflectance from Shading

no code implementations 7 Oct 2020 Jonathan T. Barron, Jitendra Malik

A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images of that world.

Color Constancy

Learned Dual-View Reflection Removal

no code implementations 1 Oct 2020 Simon Niklaus, Xuaner Cecilia Zhang, Jonathan T. Barron, Neal Wadhwa, Rahul Garg, Feng Liu, Tianfan Xue

Traditional reflection removal algorithms either use a single image as input, which suffers from intrinsic ambiguities, or use multiple images from a moving camera, which is inconvenient for users.

Reflection Removal

Neural Light Transport for Relighting and View Synthesis

1 code implementation 9 Aug 2020 Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue, Rohit Pandey, Sergio Orts-Escolano, Philip Davidson, Christoph Rhemann, Paul Debevec, Jonathan T. Barron, Ravi Ramamoorthi, William T. Freeman

In particular, we show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition from a chosen viewpoint.

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

1 code implementation CVPR 2021 Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth

We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs.

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

7 code implementations NeurIPS 2020 Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
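
A minimal NumPy sketch of such a Fourier feature mapping, where the frequency matrix B is sampled from a Gaussian whose scale (here 10.0) is an assumed, tunable hyperparameter:

```python
import numpy as np

def fourier_features(x, B):
    """Map low-dimensional points x (N, d) to sinusoidal features.

    B is a (num_features, d) matrix of random frequencies; the resulting
    (N, 2 * num_features) features are fed to a coordinate MLP.
    """
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Example: 2-D image coordinates in [0, 1]^2 mapped to 512 features.
rng = np.random.default_rng(0)
B = 10.0 * rng.standard_normal((256, 2))
coords = rng.random((4, 2))
feats = fourier_features(coords, B)   # shape (4, 512)
```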

Sky Optimization: Semantically aware image processing of skies in low-light photography

no code implementations 15 Jun 2020 Orly Liba, Longqi Cai, Yun-Ta Tsai, Elad Eban, Yair Movshovitz-Attias, Yael Pritch, Huizhong Chen, Jonathan T. Barron

The sky is a major component of the appearance of a photograph, and its color and tone can strongly influence the mood of a picture.

What Matters in Unsupervised Optical Flow

1 code implementation ECCV 2020 Rico Jonschkowski, Austin Stone, Jonathan T. Barron, Ariel Gordon, Kurt Konolige, Anelia Angelova

We systematically compare and analyze a set of key components in unsupervised optical flow to identify which photometric loss, occlusion handling, and smoothness regularization is most effective.

Occlusion Handling, Optical Flow Estimation

Portrait Shadow Manipulation

1 code implementation 18 May 2020 Xuaner Cecilia Zhang, Jonathan T. Barron, Yun-Ta Tsai, Rohit Pandey, Xiuming Zhang, Ren Ng, David E. Jacobs

We propose a way to explicitly encode facial symmetry and show that our dataset and training procedure enable the model to generalize to images taken in the wild.

Learning to Autofocus

no code implementations CVPR 2020 Charles Herrmann, Richard Strong Bowen, Neal Wadhwa, Rahul Garg, Qiurui He, Jonathan T. Barron, Ramin Zabih

Autofocus is an important task for digital cameras, yet current approaches often exhibit poor performance.

Depth Estimation

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

21 code implementations ECCV 2020 Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.

Neural Rendering, Novel View Synthesis
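
The density and radiance predicted along a camera ray become a pixel color through standard volume-rendering compositing; a minimal NumPy sketch of that compositing step (variable names are mine) is:

```python
import numpy as np

def composite_along_ray(sigmas, rgbs, deltas):
    """Volume-rendering compositing for one ray.

    sigmas: (S,) densities at S samples, rgbs: (S, 3) colors,
    deltas: (S,) distances between adjacent samples.
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)               # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                               # contribution of each sample
    return (weights[:, None] * rgbs).sum(axis=0)          # final ray color
```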

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

1 code implementation CVPR 2020 Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.

Boundary Cues for 3D Object Shape Recovery

no code implementations CVPR 2013 Kevin Karsch, Zicheng Liao, Jason Rock, Jonathan T. Barron, Derek Hoiem

Early work in computer vision considered a host of geometric cues for both shape reconstruction and recognition.

Handheld Mobile Photography in Very Low Light

no code implementations 24 Oct 2019 Orly Liba, Kiran Murthy, Yun-Ta Tsai, Tim Brooks, Tianfan Xue, Nikhil Karnad, Qiurui He, Jonathan T. Barron, Dillon Sharlet, Ryan Geiss, Samuel W. Hasinoff, Yael Pritch, Marc Levoy

Aside from the physical limits imposed by read noise and photon shot noise, these cameras are typically handheld, have small apertures and sensors, use mass-produced analog electronics that cannot easily be cooled, and are commonly used to photograph subjects that move, like children and pets.

Tone Mapping

Pushing the Boundaries of View Extrapolation with Multiplane Images

1 code implementation CVPR 2019 Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases linearly with the MPI disparity sampling frequency, as well as a novel MPI prediction procedure that theoretically enables view extrapolations of up to $4\times$ the lateral viewpoint movement allowed by prior work.

Learning Single Camera Depth Estimation using Dual-Pixels

1 code implementation ICCV 2019 Rahul Garg, Neal Wadhwa, Sameer Ansari, Jonathan T. Barron

Using our approach, existing monocular depth estimation techniques can be effectively applied to dual-pixel data, and much smaller models can be constructed that still infer high quality depth.

Monocular Depth Estimation

Stereoscopic Dark Flash for Low-light Photography

no code implementations 5 Jan 2019 Jian Wang, Tianfan Xue, Jonathan T. Barron, Jiawen Chen

In this work, we present a camera configuration for acquiring "stereoscopic dark flash" images: a simultaneous stereo pair in which one camera is a conventional RGB sensor, but the other camera is sensitive to near-infrared and near-ultraviolet instead of R and B.

Learning to Synthesize Motion Blur

no code implementations CVPR 2019 Tim Brooks, Jonathan T. Barron

We present a technique for synthesizing a motion blurred image from a pair of unblurred images captured in succession.

Aperture Supervision for Monocular Depth Estimation

no code implementations CVPR 2018 Pratul P. Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, Jonathan T. Barron

We present a novel method to train machine learning algorithms to estimate scene depths from a single image, by using the information provided by a camera's aperture as supervision.

Monocular Depth Estimation

Deep Bilateral Learning for Real-Time Image Enhancement

2 code implementations 10 Jul 2017 Michaël Gharbi, Jiawen Chen, Jonathan T. Barron, Samuel W. Hasinoff, Frédo Durand

For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms.

Image Enhancement
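
A hedged sketch of the "slice and apply" idea behind such a bilateral-grid architecture: a low-resolution grid of per-cell affine color transforms (assumed here to come from the paper's low-resolution network) is looked up per full-resolution pixel using a guidance value and applied as a local affine transform. Nearest-cell lookup stands in for the smoother slicing used in practice:

```python
import numpy as np

def slice_and_apply(grid, guide, image):
    """Apply a bilateral grid of local affine color transforms to an image.

    grid: (Gh, Gw, Gd, 3, 4) per-cell affine transforms; guide: (H, W) in [0, 1];
    image: (H, W, 3) in [0, 1]. Returns the transformed (H, W, 3) image.
    """
    Gh, Gw, Gd = grid.shape[:3]
    H, W = guide.shape
    gy = np.round(np.linspace(0, Gh - 1, H)).astype(int)[:, None]   # (H, 1)
    gx = np.round(np.linspace(0, Gw - 1, W)).astype(int)[None, :]   # (1, W)
    gz = np.round(guide * (Gd - 1)).astype(int)                     # (H, W)
    A = grid[gy, gx, gz]                                            # (H, W, 3, 4)
    homog = np.concatenate([image, np.ones((H, W, 1))], axis=-1)    # (H, W, 4)
    return np.einsum('hwij,hwj->hwi', A, homog)                     # per-pixel affine transform
```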

Continuously Differentiable Exponential Linear Units

no code implementations 24 Apr 2017 Jonathan T. Barron

Exponential Linear Units (ELUs) are a useful rectifier for constructing deep learning architectures, as they may speed up and otherwise improve learning by virtue of not having vanishing gradients and by having mean activations near zero.
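
A minimal NumPy version of the CELU nonlinearity; at x = 0 both branches have value 0 and slope 1, which is what makes the rectifier continuously differentiable:

```python
import numpy as np

def celu(x, alpha=1.0):
    """CELU(x) = x for x >= 0, alpha * (exp(x / alpha) - 1) for x < 0."""
    x = np.asarray(x, dtype=np.float64)
    neg = alpha * (np.exp(np.minimum(x, 0.0) / alpha) - 1.0)
    return np.where(x >= 0.0, x, neg)
```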

A General and Adaptive Robust Loss Function

3 code implementations CVPR 2019 Jonathan T. Barron

We present a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions.

Image Generation
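
A NumPy sketch of this loss family (variable names are mine): a shape parameter alpha interpolates between the named losses, with a scale c controlling the width of the quadratic bowl near zero:

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """Barron's general robust loss of residuals x with shape alpha and scale c.

    alpha = 2 gives L2, alpha = 1 gives Charbonnier/pseudo-Huber,
    alpha = 0 gives Cauchy/Lorentzian, alpha = -2 gives Geman-McClure,
    and alpha -> -inf gives Welsch/Leclerc.
    """
    z = (np.asarray(x, dtype=np.float64) / c) ** 2
    if alpha == 2.0:
        return 0.5 * z
    if alpha == 0.0:
        return np.log1p(0.5 * z)
    if np.isneginf(alpha):
        return 1.0 - np.exp(-0.5 * z)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)
```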

Fast Fourier Color Constancy

1 code implementation CVPR 2017 Jonathan T. Barron, Yun-Ta Tsai

We present Fast Fourier Color Constancy (FFCC), a color constancy algorithm which solves illuminant estimation by reducing it to a spatial localization task on a torus.

Color Constancy
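
The torus in question is a wrapped histogram of log-chrominance values. Below is a hedged sketch of building such a histogram; the log-chroma convention, bin count, and bin size are illustrative assumptions, and the real method additionally weights the histogram and localizes the illuminant with learned convolutions on the torus:

```python
import numpy as np

def wrapped_log_chroma_histogram(raw_rgb, n=64, bin_size=0.125, uv0=0.0):
    """Toroidal (wrap-around) histogram of log-chrominance values.

    raw_rgb: (N, 3) linear RGB samples; u = log(g/r), v = log(g/b) is one
    common log-chroma convention.
    """
    r, g, b = raw_rgb[:, 0], raw_rgb[:, 1], raw_rgb[:, 2]
    valid = (r > 0) & (g > 0) & (b > 0)
    u = np.log(g[valid] / r[valid])
    v = np.log(g[valid] / b[valid])
    # Wrapping the coordinates onto an n x n torus is what turns illuminant
    # estimation into a localization problem with fast periodic convolutions.
    iu = np.round((u - uv0) / bin_size).astype(int) % n
    iv = np.round((v - uv0) / bin_size).astype(int) % n
    hist = np.zeros((n, n))
    np.add.at(hist, (iu, iv), 1.0)
    return hist / max(hist.sum(), 1.0)
```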

The Fast Bilateral Solver

2 code implementations 10 Nov 2015 Jonathan T. Barron, Ben Poole

We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms.

Colorization, Semantic Segmentation
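
The objective being optimized is a confidence-weighted data term plus a reference-weighted smoothness term. Below is a hedged, much slower pixel-space sketch of that objective on 4-connected neighbors; the bilateral-space machinery that makes the real solver fast is omitted, and the parameter values are illustrative:

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import cg

def edge_aware_smooth(target, confidence, reference, lam=16.0, sigma_rgb=0.125):
    """Minimize sum_i c_i (x_i - t_i)^2 + lam * sum_ij w_ij (x_i - x_j)^2
    over 4-neighbors, with affinities w_ij taken from the reference image.

    target, confidence: (H, W); reference: (H, W, 3) guide image in [0, 1].
    """
    H, W = target.shape
    idx = np.arange(H * W).reshape(H, W)

    def affinity(a, b):
        return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2.0 * sigma_rgb ** 2))

    # Horizontal and vertical neighbor affinities.
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    vals = np.concatenate([affinity(reference[:, :-1], reference[:, 1:]).ravel(),
                           affinity(reference[:-1, :], reference[1:, :]).ravel()])
    Wm = coo_matrix((vals, (rows, cols)), shape=(H * W, H * W))
    Wm = (Wm + Wm.T).tocsr()                              # symmetric affinities
    L = diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm    # graph Laplacian
    C = diags(confidence.ravel())
    x, _ = cg(lam * L + C, C @ target.ravel(), atol=1e-6, maxiter=500)
    return x.reshape(H, W)
```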

Convolutional Color Constancy

no code implementations ICCV 2015 Jonathan T. Barron

Color constancy is the problem of inferring the color of the light that illuminated a scene, usually so that the illumination color can be removed.

Color Constancy, Object Detection +1

Fast Bilateral-Space Stereo for Synthetic Defocus

no code implementations CVPR 2015 Jonathan T. Barron, Andrew Adams, YiChang Shih, Carlos Hernandez

Given a stereo pair it is possible to recover a depth map and use that depth to render a synthetically defocused image.

Multiscale Combinatorial Grouping for Image Segmentation and Object Proposal Generation

1 code implementation 3 Mar 2015 Jordi Pont-Tuset, Pablo Arbelaez, Jonathan T. Barron, Ferran Marques, Jitendra Malik

We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG).

BSDS500, Object Proposal Generation +1

Multiscale Combinatorial Grouping

no code implementations CVPR 2014 Pablo Arbelaez, Jordi Pont-Tuset, Jonathan T. Barron, Ferran Marques, Jitendra Malik

We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG).

BSDS500, Semantic Segmentation

Intrinsic Scene Properties from a Single RGB-D Image

no code implementations CVPR 2013 Jonathan T. Barron, Jitendra Malik

Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination.
