Recent vision transformers, built on self-attention, have achieved promising results on various computer vision tasks.
Recent image inpainting methods have shown promising results due to the power of deep learning, which can exploit the external information available in large training datasets.
Recent single-image super-resolution (SISR) networks, which can adapt their network parameters to specific input images, have shown promising results by exploiting the information available within the input data as well as large external datasets.
We propose a new approach for the image super-resolution (SR) task that progressively restores a high-resolution (HR) image from an input low-resolution (LR) image on the basis of a neural ordinary differential equation.
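The ODE view of progressive restoration can be sketched as fixed-step Euler integration of a residual field that pushes the upsampled LR image toward the HR image. The step count, step size, and the toy residual function `f` below are hypothetical stand-ins, not the paper's architecture:

```python
import numpy as np

def euler_restore(lr_up, f, n_steps=8, h=1.0 / 8):
    """Progressively refine an upsampled LR image by Euler-integrating
    dx/dt = f(x, t).  `f` stands in for a learned residual network."""
    x = lr_up.copy()
    for k in range(n_steps):
        t = k * h
        x = x + h * f(x, t)  # one Euler step toward the HR image
    return x

# Toy residual field: pull the image toward a fixed "HR" target.
target = np.ones((4, 4))
f = lambda x, t: target - x

x0 = np.zeros((4, 4))
out = euler_restore(x0, f)
```

With this toy field each Euler step closes a fixed fraction of the gap to the target, so `out` lands at `1 - (7/8)**8` everywhere; a learned `f` would instead encode image-dependent restoration dynamics.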
Despite the advances in generative models in computer vision, video stabilization still lacks a purely regression-based deep learning formulation.
Finally, we show that our meta-learning framework can easily be applied to any video frame interpolation network and consistently improves its performance on multiple benchmark datasets.
However, these methods are limited in exploiting the internal information available within a given test image.
We analyze the restoration performance of video denoising networks fine-tuned with the proposed self-supervision-based learning algorithm, and demonstrate that the fully convolutional network (FCN) can utilize recurring patches without requiring accurate registration among adjacent frames.
Under certain statistical assumptions on the noise, recent self-supervised denoising approaches learn network parameters without true clean images, and these methods can restore an image by exploiting the information available in the given input (i.e., its internal statistics) at test time.
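One well-known instance of such a self-supervised objective is blind-spot (Noise2Void-style) training: mask random pixels, fill each with a random neighbor, and ask the denoiser to predict the original noisy value at the masked positions. Under zero-mean, pixel-wise independent noise this approximates supervised training without clean targets. The box-filter "denoiser" below is a toy stand-in for a network, and the masking details are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def blindspot_loss(noisy, denoise_fn, n_mask=16):
    """Self-supervised blind-spot loss (a sketch): evaluate the
    denoiser only at masked pixels whose true noisy values were
    hidden from it, so it cannot learn the identity mapping."""
    h, w = noisy.shape
    ys = rng.integers(1, h - 1, n_mask)
    xs = rng.integers(1, w - 1, n_mask)
    masked = noisy.copy()
    # Replace each masked pixel with a randomly chosen neighbor.
    dy = rng.choice([-1, 1], n_mask)
    dx = rng.choice([-1, 1], n_mask)
    masked[ys, xs] = noisy[ys + dy, xs + dx]
    pred = denoise_fn(masked)
    # The loss is computed only at the blind-spot positions.
    return np.mean((pred[ys, xs] - noisy[ys, xs]) ** 2)

# Toy denoiser: a 3x3 box filter built from shifts (stands in for a CNN).
def box_filter(img):
    acc = np.zeros_like(img)
    for sy in (-1, 0, 1):
        for sx in (-1, 0, 1):
            acc += np.roll(np.roll(img, sy, 0), sx, 1)
    return acc / 9.0

noisy = rng.normal(0.5, 0.1, (32, 32))
loss = blindspot_loss(noisy, box_filter)
```

In actual methods, gradients of this loss train the denoiser's parameters; here the fixed box filter merely illustrates how the loss is evaluated.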
In the training stage, we train the network via meta-learning; thus, the network can quickly adapt to any input image at test time.
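The "train so that a few test-time gradient steps suffice" idea can be illustrated with a minimal MAML-style toy problem. Each task's quadratic loss stands in for a per-image adaptation loss; the inner/outer learning rates and iteration counts are arbitrary choices for the sketch, not the paper's settings:

```python
import numpy as np

# Toy tasks: "image" i prefers parameters near c_i, with loss
# L_i(theta) = (theta - c_i)^2 standing in for a per-image loss.
centers = np.array([-1.0, 0.0, 2.0])

def inner_adapt(theta, c, lr=0.25):
    """One gradient step on the task's own loss (test-time adaptation)."""
    grad = 2.0 * (theta - c)
    return theta - lr * grad

def meta_train(theta=5.0, meta_lr=0.05, iters=500, lr=0.25):
    """MAML-style meta-training (a sketch): optimize theta so that a
    SINGLE inner step already yields a low loss on every task."""
    for _ in range(iters):
        meta_grad = 0.0
        for c in centers:
            adapted = inner_adapt(theta, c, lr)
            # Backprop through the inner step:
            # d(adapted)/d(theta) = 1 - 2 * lr.
            meta_grad += 2.0 * (adapted - c) * (1.0 - 2.0 * lr)
        theta -= meta_lr * meta_grad
    return theta

theta_star = meta_train()
```

For these symmetric quadratic tasks the meta-trained initialization converges to the mean of the task optima, from which one inner step adapts quickly to any of them; a real network plays the same game in a high-dimensional parameter space.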
Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing.
State-of-the-art video restoration methods integrate optical flow estimation networks to utilize temporal information.
To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simplifying assumptions, such as the blur kernel being partially uniform or locally linear.
We also provide a novel analysis of the blur kernel at object boundaries, which reveals distinctive characteristics of the kernel that cannot be captured by conventional blur models.
We infer bidirectional optical flows to handle motion blur, and also estimate Gaussian blur maps to remove defocus blur within our new blur model.
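The two blur components of such a model can be illustrated in the forward (synthesis) direction: motion blur as averaging along a flow vector, and defocus blur as an isotropic Gaussian. The uniform flow and the single scalar `sigma` below are drastic simplifications of the estimated per-pixel flow and blur maps, kept only to make the sketch self-contained:

```python
import numpy as np

def motion_blur(img, flow, n_taps=5):
    """Average the image along a (here: uniform) flow vector, a crude
    stand-in for motion blur synthesized from optical flow."""
    acc = np.zeros_like(img)
    for a in np.linspace(-0.5, 0.5, n_taps):
        dy, dx = int(round(a * flow[0])), int(round(a * flow[1]))
        acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / n_taps

def defocus_blur(img, sigma):
    """Isotropic Gaussian blur (the per-pixel sigma map of a full
    model is reduced to one scalar in this sketch)."""
    r = int(3 * sigma)
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    # Separable convolution built from shifted, weighted sums.
    out = sum(w * np.roll(img, s, 0) for w, s in zip(k, xs))
    out = sum(w * np.roll(out, s, 1) for w, s in zip(k, xs))
    return out

img = np.zeros((16, 16))
img[8, 8] = 1.0  # a single bright point
blurred = defocus_blur(motion_blur(img, flow=(0.0, 4.0)), sigma=1.0)
```

Both operators conserve total intensity (the circular shifts wrap), so the point source is smeared into a horizontal streak softened by the Gaussian; a restoration network effectively learns to invert this composition.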