no code implementations • 19 Apr 2024 • Chengxu Liu, Xuan Wang, Xiangyu Xu, Ruhao Tian, Shuai Li, Xueming Qian, Ming-Hsuan Yang
In particular, we use a motion estimation network to capture motion information from neighborhoods, thereby adaptively estimating spatially-variant motion flow, mask, kernels, weights, and offsets to obtain the MISC Filter.
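The core operation described above is spatially-variant filtering: a different kernel is applied at every pixel. A minimal NumPy toy of that step (assuming, for illustration, single-channel input and per-pixel k×k kernels predicted by the motion network; the function name is hypothetical and the paper's mask, weight, and offset terms are omitted):

```python
import numpy as np

def spatially_variant_filter(img, kernels):
    """Apply a distinct k x k kernel at every pixel (simplified sketch).

    img:     (H, W) single-channel frame
    kernels: (H, W, k, k) per-pixel kernels, e.g. predicted by a motion network
    """
    H, W, k, _ = kernels.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            # inner product of the local patch with this pixel's own kernel
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out
```

With averaging kernels this reduces to a box blur; a learned network would instead predict motion-adaptive kernels per pixel.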
1 code implementation • 8 Mar 2024 • Chengxu Liu, Xuan Wang, Yuanting Fan, Shuai Li, Xueming Qian
The pixel array of light-emitting diodes used for display diffracts and attenuates incident light, causing various degradations as the light intensity changes.
no code implementations • ICCV 2023 • Chengxu Liu, Xuan Wang, Shuai Li, Yuzhi Wang, Xueming Qian
In this paper, we introduce a new perspective to handle various diffraction in UDC images by jointly exploring the feature restoration in the frequency and spatial domains, and present a Frequency and Spatial Interactive Learning Network (FSI).
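The idea of jointly exploring the frequency and spatial domains can be sketched with a toy two-branch function (assumptions: a hard low-pass mask stands in for the learned frequency branch, a 3×3 box filter stands in for learned spatial convolutions, and the fusion is a fixed weighted sum rather than the paper's interactive learning):

```python
import numpy as np

def frequency_spatial_restore(img, cutoff=0.25, alpha=0.5):
    """Toy two-branch restoration mixing frequency- and spatial-domain outputs.

    Frequency branch: keep only low frequencies of the centred spectrum
    (diffraction in UDC images is naturally described in the Fourier domain).
    Spatial branch: a 3x3 box blur as a stand-in for learned convolutions.
    """
    H, W = img.shape
    # --- frequency branch: low-pass mask on the shifted spectrum ---
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:H, :W]
    dist = np.hypot(yy - H / 2, xx - W / 2)
    mask = dist <= cutoff * min(H, W)
    freq_out = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    # --- spatial branch: 3x3 box filter with edge padding ---
    p = np.pad(img, 1, mode="edge")
    spat_out = sum(p[dy:dy + H, dx:dx + W]
                   for dy in range(3) for dx in range(3)) / 9.0
    # --- fusion of the two branches ---
    return alpha * freq_out + (1 - alpha) * spat_out
```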
no code implementations • ICCV 2023 • Changlong Gao, Chengxu Liu, Yujie Dun, Xueming Qian
For better category-level feature alignment, we propose CSDA, a novel DAOD framework that jointly exploits category and scale information; this design enables effective object learning across different scales.
no code implementations • 5 Sep 2022 • Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
In particular, we first introduce a lightweight context encoder and a parameter encoder to learn a context map for the pixel-level category and a group of image-adaptive coefficients, respectively.
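How a pixel-level context map and image-adaptive coefficients combine can be illustrated with a small toy (assumptions: the context map assigns an integer category per pixel, and the parameter encoder's output is reduced to one (gain, bias) pair per category; the function name is illustrative):

```python
import numpy as np

def apply_adaptive_coefficients(img, context_map, coeffs):
    """Category-conditioned enhancement, heavily simplified.

    img:         (H, W) image in [0, 1]
    context_map: (H, W) integer category per pixel (context-encoder output)
    coeffs:      (C, 2) per-category (gain, bias) (parameter-encoder output)
    """
    # gather each pixel's coefficients by its category index
    gain = coeffs[context_map, 0]
    bias = coeffs[context_map, 1]
    return np.clip(img * gain + bias, 0.0, 1.0)
```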
Ranked #7 on Image Enhancement on MIT-Adobe 5k (SSIM on proRGB metric)
no code implementations • 19 Jul 2022 • Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
In particular, we formulate the warped features with inconsistent motions as query tokens, and formulate relevant regions in a motion trajectory from two original consecutive frames into keys and values.
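The query/key/value formulation above is standard scaled dot-product cross-attention: warped-feature tokens (queries) attend to trajectory tokens gathered from the two source frames (keys and values). A minimal NumPy sketch of that attention step, not of the full model:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention.

    queries: (Nq, d) tokens from warped features
    keys:    (Nk, d) tokens from trajectory regions of the source frames
    values:  (Nk, d) corresponding value tokens
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # each output is a convex combination of the value tokens
    return weights @ values
```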
1 code implementation • CVPR 2022 • Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
Existing approaches usually align and aggregate video frames from a limited number of adjacent frames (e.g., 5 or 7), which prevents them from achieving satisfactory results.
Ranked #4 on Video Super-Resolution on UDM10 - 4x upscaling