Defocus Deblurring Using Dual-Pixel Data

ECCV 2020 · Abdullah Abuolaim, Michael S. Brown

Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture. Correcting defocus blur is challenging because the blur is spatially varying and difficult to estimate. We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras. DP sensors are used to assist a camera's auto-focus by capturing two sub-aperture views of the scene in a single image shot. The two sub-aperture images are used to calculate the appropriate lens position to focus on a particular scene region and are discarded afterwards. We introduce a deep neural network (DNN) architecture that uses these discarded sub-aperture images to reduce defocus blur. A key contribution of our effort is a carefully captured dataset of 500 scenes (2000 images) where each scene has: (i) an image with defocus blur captured at a large aperture; (ii) the two associated DP sub-aperture views; and (iii) the corresponding all-in-focus image captured with a small aperture. Our proposed DNN produces results that are significantly better than conventional single image methods in terms of both quantitative and perceptual metrics -- all from data that is already available on the camera but ignored. The dataset, code, and trained models are available at
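The abstract describes feeding the two DP sub-aperture views to a DNN in place of a single blurry image. A common way to do this (a minimal sketch only; the paper's actual DPDNet preprocessing may differ) is to stack the two RGB views channel-wise into one 6-channel input:

```python
import numpy as np

def dp_input(left_view: np.ndarray, right_view: np.ndarray) -> np.ndarray:
    """Stack the two dual-pixel sub-aperture RGB views (H x W x 3 each)
    into a single 6-channel array suitable as CNN input.
    Illustrative sketch, not the paper's exact pipeline."""
    assert left_view.shape == right_view.shape, "views must match in size"
    return np.concatenate([left_view, right_view], axis=-1)

# Hypothetical sub-aperture pair for shape illustration.
left = np.zeros((256, 256, 3), dtype=np.float32)
right = np.zeros((256, 256, 3), dtype=np.float32)
x = dp_input(left, right)  # shape: (256, 256, 6)
```

The channel-wise stacking lets a standard encoder-decoder network consume both views at once, so the network can exploit the small view disparity that encodes defocus.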



Introduced in the Paper:

DPD (Dual-view)

Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Image Defocus Deblurring | DPD | DPDNet | Combined PSNR | 25.13 | #6 |
| Image Defocus Deblurring | DPD | DPDNet | Combined SSIM | 0.786 | #5 |
| Image Defocus Deblurring | DPD | DPDNet | LPIPS | 0.277 | #6 |
| Image Defocus Deblurring | DPD (Dual-view) | DPDNet | PSNR | 24.34 | #13 |
| Image Defocus Deblurring | DPD (Dual-view) | DPDNet | SSIM | 0.747 | #13 |
| Image Defocus Deblurring | DPD (Dual-view) | DPDNet | LPIPS | 0.277 | #8 |
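The table reports PSNR (higher is better, in dB), SSIM, and LPIPS (lower is better). PSNR has a closed form and is easy to sanity-check; the sketch below computes it for images scaled to [0, 1]. (LPIPS requires a pretrained network and is omitted here.)

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = float(np.mean((pred - target) ** 2))
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 0.5 on a [0, 1] image gives
# MSE = 0.25, so PSNR = 10 * log10(1 / 0.25) ≈ 6.02 dB.
score = psnr(np.full((8, 8), 0.5), np.zeros((8, 8)))
```

For real evaluation one would use a vetted implementation (e.g. the one bundled with the paper's released code) rather than this sketch, to match rounding and dynamic-range conventions.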
