Video Enhancement
39 papers with code • 1 benchmark • 5 datasets
Latest papers
Dancing in the Dark: A Benchmark towards General Low-light Video Enhancement
To address the lack of such a benchmark, we design a camera system and collect a high-quality low-light video dataset with multiple exposures and cameras.
NIR-assisted Video Enhancement via Unpaired 24-hour Data
In this paper, we demonstrate for the first time the feasibility and superiority of NIR-assisted low-light video enhancement trained on unpaired 24-hour data, which significantly eases data collection and improves generalization on in-the-wild data.
Learning Spatiotemporal Frequency-Transformer for Low-Quality Video Super-Resolution
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos.
Low-Light Image and Video Enhancement: A Comprehensive Survey and Beyond
This paper presents a comprehensive survey of low-light image and video enhancement, addressing two primary challenges in the field.
Low Light Video Enhancement by Learning on Static Videos with Cross-Frame Attention
The design of deep learning methods for low-light video enhancement remains a challenging problem, owing to the difficulty of capturing paired low-light and ground-truth videos.
Learning Spatiotemporal Frequency-Transformer for Compressed Video Super-Resolution
First, we divide a video frame into patches, and transform each patch into DCT spectral maps in which each channel represents a frequency band.
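The patch-to-spectral-map step described above can be sketched in NumPy. The function name `frame_to_dct_bands` and the 8×8 patch size are illustrative assumptions, not the paper's implementation; the idea is only that each output channel collects one DCT frequency band across the patch grid.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def frame_to_dct_bands(frame, patch=8):
    """Split a grayscale frame into non-overlapping patches, 2-D
    DCT-transform each patch, and regroup the coefficients so each
    output channel holds one frequency band over the patch grid."""
    h, w = frame.shape
    ph, pw = h // patch, w // patch
    # (ph, pw, patch, patch): grid of non-overlapping patches
    patches = (frame[:ph * patch, :pw * patch]
               .reshape(ph, patch, pw, patch).swapaxes(1, 2))
    d = dct_matrix(patch)
    coeffs = d @ patches @ d.T  # 2-D DCT applied to every patch
    # -> (patch*patch, ph, pw): channel k is frequency band k
    return coeffs.reshape(ph, pw, patch * patch).transpose(2, 0, 1)
```

For an 8×8 patch this yields 64 channels; channel 0 is the DC (average brightness) band and higher channels hold progressively higher spatial frequencies.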
Towards Interpretable Video Super-Resolution via Alternating Optimization
These issues can be alleviated by a cascade of three separate sub-tasks, including video deblurring, frame interpolation, and super-resolution, which, however, would fail to capture the spatial and temporal correlations among video sequences.
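The three-stage cascade mentioned above can be sketched as follows. All three stages here are hypothetical stand-ins (an unsharp mask for deblurring, a linear blend for frame interpolation, nearest-neighbour upsampling for super-resolution) chosen only to make the pipeline structure concrete; real methods would use learned models at each stage.

```python
import numpy as np

def deblur(frame):
    # Stand-in: a light unsharp mask instead of a learned deblurrer.
    blur = (frame + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
            + np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 5.0
    return np.clip(frame + (frame - blur), 0.0, 1.0)

def interpolate(a, b):
    # Stand-in: linear blend instead of motion-compensated interpolation.
    return 0.5 * (a + b)

def upscale(frame, s=2):
    # Stand-in: nearest-neighbour upsampling instead of learned SR.
    return frame.repeat(s, axis=0).repeat(s, axis=1)

def cascade(frames, s=2):
    """Deblur -> interpolate -> super-resolve, each stage run
    independently, with no shared spatiotemporal modeling."""
    sharp = [deblur(f) for f in frames]
    mid = [interpolate(a, b) for a, b in zip(sharp, sharp[1:])]
    seq = [x for pair in zip(sharp, mid) for x in pair] + [sharp[-1]]
    return [upscale(f, s) for f in seq]
```

Because each stage is optimized in isolation, errors from one stage propagate to the next, which is the weakness the alternating-optimization approach above targets.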
Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video Restoration
On the other hand, we equip the sequence-to-sequence model with an unsupervised optical flow estimator to maximize its potential.
Unifying Motion Deblurring and Frame Interpolation with Events
Slow shutter speed and long exposure time of frame-based cameras often cause visual blur and loss of inter-frame information, degenerating the overall quality of captured videos.
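The blur formation model implied above is often approximated by temporal averaging: a long-exposure frame integrates the sharp latent frames over the exposure window. A minimal sketch, assuming linear sensor intensity and no gain:

```python
import numpy as np

def synthesize_blur(latent_frames):
    """Approximate a long-exposure blurry frame as the temporal
    average of the sharp latent frames captured during exposure."""
    return np.mean(np.stack(latent_frames), axis=0)
```

Under this model a moving bright feature is smeared across its trajectory, and the in-between latent frames are lost, which is exactly the information event cameras can help recover.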
DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting
In this paper, we propose DeMFI-Net, a novel joint deblurring and multi-frame interpolation (DeMFI) framework that accurately converts blurry, lower-frame-rate videos into sharp, higher-frame-rate videos. It is built on a flow-guided attentive-correlation-based feature bolstering (FAC-FB) module and recursive boosting (RB) for multi-frame interpolation (MFI).