To address these issues, we first propose a new shadow illumination model for the shadow removal task.
However, the procedures for correcting underexposure and for correcting overexposure to a normal exposure differ substantially from each other, which creates large discrepancies when a single network must handle multiple exposures and thus results in poor performance.
Despite this remarkable progress, existing state-of-the-art pan-sharpening methods do not explicitly enforce complementary information learning between the two modalities of PAN and MS images.
Pan-sharpening aims to obtain high-resolution multispectral (MS) images for remote sensing systems, and deep learning-based methods have achieved remarkable success.
It is based on our observation that deep degradation representations can be clustered by degradation characteristics (types of rain) while remaining independent of image content.
Deep learning provides a new avenue for image restoration, which demands a delicate balance between fine-grained details and high-level contextualized information while recovering the latent clear image.
The proposed model achieves superior performance on both inhomogeneous and incremental datasets, and is promising for highly compact systems that must gradually learn the myriad regularities of different types of rain streaks.
In this paper, we aim to improve the performance of compact VSR networks without changing their original architectures, through a knowledge distillation approach that transfers knowledge from a complicated VSR network to a compact one.
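As a hedged illustration of such feature-level distillation (the loss form, the weighting factor, and all names below are our assumptions, not the paper's exact scheme):

```python
import torch.nn.functional as F

def distillation_loss(student_out, student_feat, teacher_feat, gt, alpha=0.1):
    """Task loss on the ground truth plus a feature-matching term that
    pulls student features toward the frozen teacher's features."""
    recon = F.l1_loss(student_out, gt)                        # ordinary VSR reconstruction loss
    mimic = F.mse_loss(student_feat, teacher_feat.detach())   # imitate the teacher's representation
    return recon + alpha * mimic                              # alpha balances the two terms
```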
Our approach, termed Twice Mixing, is motivated by the observation that a mid-quality image can be generated by mixing a high-quality image with its low-quality version.
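A minimal sketch of this mixing idea, assuming a simple pixel-wise convex combination (the function name and random weight are illustrative, not the paper's exact scheme):

```python
import numpy as np

def mix_pair(hq, lq, seed=None):
    """Blend a high-quality image with its low-quality counterpart to
    synthesize a mid-quality sample whose relative rank is known."""
    t = np.random.default_rng(seed).uniform()  # random mixing weight in [0, 1)
    mid = t * hq + (1.0 - t) * lq              # pixel-wise convex combination
    return mid, t                              # t can double as a quality label
```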
Specifically, we design a variational model to formulate the image de-blocking problem and propose two prior terms for the image content and gradient, respectively.
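One plausible form of such an objective, written here only as an illustration (the data term, priors $\Phi_c, \Phi_g$, and weights $\lambda_1, \lambda_2$ are our notation, not the paper's):

$$\min_{u}\; \tfrac{1}{2}\,\|u - y\|_2^2 \;+\; \lambda_1\,\Phi_c(u) \;+\; \lambda_2\,\Phi_g(\nabla u),$$

where $y$ is the blocky compressed input, $u$ the restored image, $\Phi_c$ a prior on image content, and $\Phi_g$ a prior on the gradient field.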
Equipped with our NR algorithm, the deep model can be trained on a list of synthetic rainy datasets while overcoming catastrophic forgetting, making it a general-purpose deraining network.
In unconstrained real-world surveillance scenarios, person re-identification (Re-ID) models usually suffer from various low-level perceptual variations, e.g., cross-resolution and insufficient lighting.
However, real noisy images in practice are mostly high-resolution rather than small cropped patches, and vanilla training strategies ignore the cross-patch contextual dependency within the whole image.
Person re-identification (Re-ID) in real-world scenarios usually suffers from various degradation factors, e.g., low resolution, weak illumination, blurring, and adverse weather.
Using the new image pair, the denoising network learns to generate clean and high-quality images from noisy observations.
Then a more accurate spatial-preservation term based on local gradient constraints is incorporated into the objective to fully exploit the spatial information contained in the PAN image.
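A sketch of one common way to encode such a local gradient constraint (the per-band weight map $g_b$ and the notation are assumptions on our part):

$$\min_{H}\; \sum_{b}\, \big\| \nabla H_b \;-\; g_b \odot \nabla P \big\|_1,$$

where $H_b$ is band $b$ of the fused high-resolution MS image, $P$ the PAN image, and $g_b$ a locally estimated ratio that adapts the PAN gradients to each band.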
Existing methods for single-image raindrop removal either lack robustness or suffer from heavy parameter burdens.
We propose a simple yet effective deep tree-structured fusion model based on feature aggregation for the deraining problem.
Single-image rain streak removal is extremely important, since rainy images adversely affect many computer vision systems.
We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation.
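A minimal PyTorch-style sketch of these two aims as we read them, with layer widths and the high-pass filter chosen purely for illustration: low frequencies bypass the network through a skip connection (spectral preservation), while the CNN operates on high-pass detail only (spatial preservation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PanNetSketch(nn.Module):
    """Illustrative PanNet-style forward pass; not the authors' exact layers."""
    def __init__(self, ms_bands=4, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_bands + 1, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, ms_bands, 3, padding=1),
        )

    @staticmethod
    def highpass(x):
        # crude high-pass: subtract a local mean (the paper's filter may differ)
        return x - F.avg_pool2d(x, kernel_size=5, stride=1, padding=2)

    def forward(self, pan, ms_up):
        # spatial preservation: the CNN sees only high-frequency structure
        detail = self.body(torch.cat([self.highpass(pan), self.highpass(ms_up)], dim=1))
        # spectral preservation: a skip connection carries the MS spectrum unchanged
        return ms_up + detail
```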
We propose a new deep network architecture for removing rain streaks from individual images, based on deep convolutional neural networks (CNNs).
We introduce a deep network architecture called DerainNet for removing rain streaks from an image.
We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image.
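In the log domain with $s = \log S$, $l = \log L$, $r = \log R$, one common weighted variational formulation reads (the weight map $W$ and coefficients $\alpha, \beta$ are our assumed notation):

$$\min_{r,\,l}\; \|r + l - s\|_2^2 \;+\; \alpha\,\|\nabla l\|_2^2 \;+\; \beta\,\|W \odot \nabla r\|_2^2,$$

where the first prior keeps the illumination $l$ spatially smooth and the weighted gradient prior preserves detail in the reflectance $r$.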
The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables.
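A standard nonparametric construction that matches this description is a beta-Bernoulli (beta process) prior over atom usage; the following is a sketch in our own notation, not necessarily the paper's exact model:

$$x_i = \mathbf{D}\,(z_i \odot s_i) + \varepsilon_i, \qquad z_{ik} \sim \mathrm{Bernoulli}(\pi_k), \qquad \pi_k \sim \mathrm{Beta}\!\Big(\tfrac{a}{K},\, \tfrac{b\,(K-1)}{K}\Big),$$

where the binary vector $z_i$ selects which of the $K$ atoms patch $x_i$ uses; as $K \to \infty$ this approximates a beta process, so the effective dictionary size and each patch's sparsity pattern are inferred from the data rather than fixed in advance.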