Defocus Deblurring Using Dual-Pixel Data

ECCV 2020 · Abdullah Abuolaim, Michael S. Brown

Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture. Correcting defocus blur is challenging because the blur is spatially varying and difficult to estimate. We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras. DP sensors are used to assist a camera's auto-focus by capturing two sub-aperture views of the scene in a single image shot. The two sub-aperture images are used to calculate the appropriate lens position to focus on a particular scene region and are discarded afterwards. We introduce a deep neural network (DNN) architecture that uses these discarded sub-aperture images to reduce defocus blur. A key contribution of our effort is a carefully captured dataset of 500 scenes (2000 images) where each scene has: (i) an image with defocus blur captured at a large aperture; (ii) the two associated DP sub-aperture views; and (iii) the corresponding all-in-focus image captured with a small aperture. Our proposed DNN produces results that are significantly better than conventional single image methods in terms of both quantitative and perceptual metrics -- all from data that is already available on the camera but ignored. The dataset, code, and trained models are available at https://github.com/Abdullah-Abuolaim/defocus-deblurring-dual-pixel.
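To make the input/output setup concrete, the sketch below shows one way a dual-pixel deblurring network can be wired up: the two sub-aperture views are concatenated channel-wise (6 channels) and passed through a small encoder-decoder that regresses the all-in-focus RGB image, trained against the small-aperture ground truth. This is a minimal illustration under assumed design choices (toy layer sizes, channel-wise fusion, L2 loss), not the authors' released DPDNet; the official code is at the GitHub link above.

```python
# Minimal sketch (NOT the authors' released DPDNet): a toy encoder-decoder
# that fuses the two dual-pixel sub-aperture views by channel concatenation
# and predicts a deblurred RGB image.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_dp_deblur_sketch(height=256, width=256, base=32):
    left = layers.Input((height, width, 3), name="dp_left")    # sub-aperture view 1
    right = layers.Input((height, width, 3), name="dp_right")  # sub-aperture view 2
    x = layers.Concatenate(axis=-1)([left, right])              # (H, W, 6)

    # Encoder
    e1 = layers.Conv2D(base, 3, padding="same", activation="relu")(x)
    e1 = layers.Conv2D(base, 3, padding="same", activation="relu")(e1)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = layers.Conv2D(base * 2, 3, padding="same", activation="relu")(p1)
    e2 = layers.Conv2D(base * 2, 3, padding="same", activation="relu")(e2)

    # Decoder with a skip connection
    u1 = layers.Conv2DTranspose(base, 2, strides=2, padding="same")(e2)
    d1 = layers.Concatenate(axis=-1)([u1, e1])
    d1 = layers.Conv2D(base, 3, padding="same", activation="relu")(d1)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid", name="sharp_rgb")(d1)

    return Model([left, right], out, name="dp_deblur_sketch")

# Usage: train on (left view, right view) -> all-in-focus target pairs.
model = build_dp_deblur_sketch(256, 256)
model.compile(optimizer="adam", loss="mse")  # L2 loss against the sharp ground truth
model.summary()
```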


Datasets


Introduced in the Paper:

DPD (Dual-view)

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Defocus Deblurring | DPD | DPDNet | Combined PSNR | 25.13 | #9 |
| Image Defocus Deblurring | DPD | DPDNet | Combined SSIM | 0.786 | #8 |
| Image Defocus Deblurring | DPD | DPDNet | LPIPS | 0.277 | #9 |
| Image Defocus Deblurring | DPD (Dual-view) | DPDNet | PSNR | 24.34 | #14 |
| Image Defocus Deblurring | DPD (Dual-view) | DPDNet | SSIM | 0.747 | #14 |
| Image Defocus Deblurring | DPD (Dual-view) | DPDNet | LPIPS | 0.277 | #9 |
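PSNR and SSIM in the table are standard full-reference fidelity metrics between the deblurred output and the small-aperture (all-in-focus) ground truth, while LPIPS is a learned perceptual distance. Below is a small example of how PSNR/SSIM could be computed for one image pair with scikit-image (assuming scikit-image >= 0.19 for the `channel_axis` argument); LPIPS requires its own learned-metric package and is omitted here.

```python
# Example metric computation for one prediction / ground-truth pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(deblurred: np.ndarray, sharp: np.ndarray) -> dict:
    """Both inputs are HxWx3 float arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(sharp, deblurred, data_range=1.0)
    ssim = structural_similarity(sharp, deblurred, data_range=1.0, channel_axis=-1)
    return {"PSNR": psnr, "SSIM": ssim}

# Toy usage with random images; in practice, pass a DPDNet output and the
# corresponding all-in-focus ground truth from the dataset.
pred = np.random.rand(256, 256, 3)
gt = np.random.rand(256, 256, 3)
print(evaluate_pair(pred, gt))
```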

Methods


No methods listed for this paper.