Image Defocus Deblurring
9 papers with code • 1 benchmark • 0 datasets
Since convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, these models have been extensively applied to image restoration and related tasks.
Powered by these two designs, Uformer enjoys a high capability for capturing both local and global dependencies for image restoration.
In particular, we estimate the blur amounts of different regions by the internal geometric constraint of the DP data, which measures the defocus disparity between the left and right views.
Specifically, we show that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to learn to deblur the image.
To exploit this property of inverse kernels, we use the observation that when only the size of a defocus blur changes while its shape is kept fixed, the shape of the corresponding inverse kernel also stays the same and only its scale changes.
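Applying an inverse kernel is most naturally sketched in the Fourier domain. The snippet below builds a Gaussian stand-in for a defocus kernel and applies a regularized (Wiener-style) inverse filter; under the scale property above, a bank of such inverse kernels at one size could in principle be rescaled to other blur sizes. This is a generic sketch, not the paper's method, and all names here are hypothetical:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Isotropic Gaussian as a smooth stand-in for a defocus kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def pad_to(kernel, shape):
    # Embed the kernel in a full-size array, with its center rolled to
    # the origin so FFT multiplication performs circular convolution.
    out = np.zeros(shape)
    kh, kw = kernel.shape
    out[:kh, :kw] = kernel
    return np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def circular_blur(img, kernel):
    H = np.fft.fft2(pad_to(kernel, img.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def wiener_inverse_deblur(blurred, kernel, eps=1e-3):
    # Regularized inverse filter: conj(H) / (|H|^2 + eps).
    H = np.fft.fft2(pad_to(kernel, blurred.shape))
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * H_inv))
```

On smooth (low-frequency) content the Wiener inverse recovers the signal almost exactly; the regularizer `eps` keeps frequencies the blur has destroyed from being amplified into noise.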
Defocus blur is a kind of blur effect often seen in images; it is challenging to remove because its amount varies spatially across the image.
It has been a common practice to adopt the ResBlock, which learns the difference between blurry and sharp image pairs, in end-to-end image deblurring architectures.
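The residual-learning idea behind the ResBlock — the branch learns only the difference between the blurry input and the sharp target, while a skip connection carries the input through — can be sketched with a minimal per-pixel feature block. This is a toy illustration with hypothetical names, not any specific architecture's implementation:

```python
import numpy as np

def res_block(x, w1, b1, w2, b2):
    """Minimal residual block on per-pixel feature vectors.

    The two matrix multiplies play the role of 1x1 convolutions over
    channels; the output is input + learned residual, so the branch
    only needs to model the small blurry-to-sharp difference.
    """
    h = np.maximum(x @ w1 + b1, 0.0)  # first 1x1 conv + ReLU
    return x + (h @ w2 + b2)          # identity skip + residual branch
```

With all weights at zero the block is exactly the identity, which is why residual blocks are a natural fit for deblurring: the sharp image is the blurry image plus a comparatively small correction.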