1 code implementation • 30 Nov 2023 • Yudong Wang, Jichang Guo, Wanru He, Huan Gao, Huihui Yue, Zenan Zhang, Chongyi Li
Together with 7 object detection models retrained on raw underwater images, these 133 models are used to comprehensively analyze the effect of underwater image enhancement on underwater object detection.
1 code implementation • 16 Oct 2023 • Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, Xiangyu Zhang
Specifically, we design a first-frame-conditioned pipeline that uses an off-the-shelf text-to-image model for content generation so that our tuned video diffusion model mainly focuses on motion learning.
1 code implementation • ICCV 2023 • Shangchen Zhou, Chongyi Li, Kelvin C. K. Chan, Chen Change Loy
We also propose a mask-guided sparse video Transformer, which achieves high efficiency by discarding unnecessary and redundant tokens.
Ranked #1 on Video Inpainting on DAVIS
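The token-discarding idea above can be illustrated with a generic sketch: attention is computed only over tokens selected by a mask (e.g., those inside or near the corrupted region), and the remaining tokens pass through unchanged. This is a simplified, hypothetical PyTorch illustration, not the paper's actual module; the selection rule and attention settings are assumptions.

```python
import torch
import torch.nn as nn

def masked_sparse_attention(tokens, token_mask, attn):
    """Run self-attention only on the tokens selected by `token_mask`.

    tokens:     (N, D) token features for one frame
    token_mask: (N,) bool, True for tokens inside/near the corrupted region
    attn:       an nn.MultiheadAttention module with embed_dim == D
    """
    out = tokens.clone()
    idx = token_mask.nonzero(as_tuple=True)[0]
    if idx.numel() == 0:                          # nothing to inpaint here
        return out
    selected = tokens[idx].unsqueeze(0)           # (1, K, D)
    updated, _ = attn(selected, selected, selected)
    out[idx] = updated.squeeze(0)                 # scatter back; other tokens unchanged
    return out

# Toy usage: a 24x24 grid of 64-dim tokens, ~20% marked as corrupted.
tokens = torch.randn(24 * 24, 64)
token_mask = torch.rand(24 * 24) < 0.2
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
refined = masked_sparse_attention(tokens, token_mask, attn)
print(refined.shape)  # torch.Size([576, 64])
```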
1 code implementation • ICCV 2023 • Naishan Zheng, Man Zhou, Yanmeng Dong, Xiangyu Rui, Jie Huang, Chongyi Li, Feng Zhao
In this work, we propose a paradigm for low-light image enhancement that explores the potential of customized learnable priors to improve the transparency of the deep unfolding paradigm.
1 code implementation • 31 Aug 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li
In this paper, we propose a solution that improves lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy.
no code implementations • ICCV 2023 • Man Zhou, Jie Huang, Naishan Zheng, Chongyi Li
Such designs inject the image reasoning prior into deep unfolding networks while improving their interpretability and representation capability.
no code implementations • 23 Aug 2023 • Jingchun Zhou, Zongxin He, Kin-Man Lam, Yudong Wang, Weishi Zhang, Chunle Guo, Chongyi Li
In this paper, we present a novel Amplitude-Modulated Stochastic Perturbation and Vortex Convolutional Network, AMSP-UOD, designed for underwater object detection.
no code implementations • 23 Aug 2023 • Dehuan Zhang, Jingchun Zhou, Weishi Zhang, Chunle Guo, Chongyi Li
ASISF improves multiscale detail refinement while reducing interference from irrelevant scene information in the low-degradation stage.
1 code implementation • ICCV 2023 • Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo, Ruixun Zhang, Xialei Liu, Chongyi Li
Calibration-based methods have dominated RAW image denoising under extremely low-light environments.
Ranked #1 on Image Denoising on SID SonyA7S2 x300
no code implementations • 1 Aug 2023 • Jinghao Zhang, Jie Huang, Man Zhou, Chongyi Li, Feng Zhao
Learning to restore multiple image degradations within a single model is quite beneficial for real-world applications.
1 code implementation • 26 Jun 2023 • Zhiqiang Yan, Yupeng Zheng, Chongyi Li, Jun Li, Jian Yang
Depth completion is the task of recovering dense depth maps from sparse ones, usually with the help of color images.
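As a concrete picture of the task's data format (not the paper's method): a sparse depth map stores valid measurements at a small fraction of pixels and zeros elsewhere, and a trivial baseline densifies it by copying each missing pixel's nearest valid measurement. The RGB guidance used by learning-based methods is omitted in this minimal sketch.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def naive_densify(sparse_depth):
    """Fill missing depth (zeros) with the nearest valid measurement."""
    valid = sparse_depth > 0
    # For each pixel, get the indices of the closest valid pixel.
    _, (iy, ix) = distance_transform_edt(~valid, return_indices=True)
    return sparse_depth[iy, ix]

# Toy example: a 480x640 depth map with ~5% valid ToF/LiDAR samples.
rng = np.random.default_rng(0)
dense_gt = rng.uniform(0.5, 10.0, size=(480, 640)).astype(np.float32)
mask = rng.random((480, 640)) < 0.05
sparse = np.where(mask, dense_gt, 0.0)

completed = naive_densify(sparse)
print(completed.shape, (completed > 0).all())  # (480, 640) True
```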
no code implementations • 25 Jun 2023 • Haoying Li, Jixin Zhao, Shangchen Zhou, Huajun Feng, Chongyi Li, Chen Change Loy
Existing image deblurring methods predominantly focus on global deblurring, inadvertently affecting the sharpness of backgrounds in locally blurred images and wasting computation on sharp pixels, especially for high-resolution images.
1 code implementation • 15 Jun 2023 • Runmin Cong, Wenyu Yang, Wei zhang, Chongyi Li, Chun-Le Guo, Qingming Huang, Sam Kwong
Among existing UIE methods, Generative Adversarial Networks (GANs) based methods perform well in visual aesthetics, while the physical model-based methods have better scene adaptability.
1 code implementation • 7 Jun 2023 • Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yihang Luo, Chen Change Loy
To address this issue, we additionally provide the annotations of light sources in Flare7K++ and propose a new end-to-end pipeline to preserve the light source while removing lens flares.
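One simple way such light-source annotations can be used, shown purely as a hedged illustration and not necessarily the paper's exact pipeline: after a network predicts a flare-free image, the annotated light-source region of the input is blended back so the lamp itself is not erased along with its flare. The soft-mask blending and the Gaussian feathering radius below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def paste_back_light_source(input_img, deflared_img, light_mask, feather_sigma=3.0):
    """Blend the annotated light-source pixels of the input back into the
    network's flare-removed prediction.

    input_img, deflared_img: float32 arrays in [0, 1], shape (H, W, 3)
    light_mask:              binary array, shape (H, W), 1 where the light source is
    """
    soft = gaussian_filter(light_mask.astype(np.float32), sigma=feather_sigma)
    soft = np.clip(soft, 0.0, 1.0)[..., None]            # (H, W, 1) feathered mask
    return soft * input_img + (1.0 - soft) * deflared_img

# Toy usage with random arrays standing in for a real image pair and annotation.
H, W = 256, 256
inp = np.random.rand(H, W, 3).astype(np.float32)
pred = np.random.rand(H, W, 3).astype(np.float32)
mask = np.zeros((H, W), dtype=np.uint8)
mask[100:120, 100:120] = 1                                # annotated light source
out = paste_back_light_source(inp, pred, mask)
print(out.shape)
```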
no code implementations • 23 May 2023 • Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Qingpeng Zhu, Qianhui Sun, Wenxiu Sun, Chen Change Loy, Jinwei Gu
In this paper, we summarize and review the Nighttime Flare Removal track on MIPI 2023.
no code implementations • 6 May 2023 • Xin Lin, Jingtong Yue, Sixian Ding, Chao Ren, Chun-Le Guo, Chongyi Li
P-Net can learn degradation feature vectors on the dark and light areas separately, using contrastive learning to guide the image restoration process.
no code implementations • 27 Apr 2023 • Qingpeng Zhu, Wenxiu Sun, Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Qianhui Sun, Chen Change Loy, Jinwei Gu, Yi Yu, Yangke Huang, Kang Zhang, Meiya Chen, Yu Wang, Yongchao Li, Hao Jiang, Amrit Kumar Muduli, Vikash Kumar, Kunal Swami, Pankaj Kumar Bajpai, Yunchao Ma, Jiajun Xiao, Zhi Ling
To evaluate the performance of different depth completion methods, we organized an RGB+sparse ToF depth completion competition.
no code implementations • 20 Apr 2023 • Qianhui Sun, Qingyu Yang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yuekun Dai, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
Developing and integrating advanced image sensors with novel algorithms in camera systems is increasingly prevalent with the growing demand for computational photography and imaging on mobile platforms.
no code implementations • 20 Apr 2023 • Qianhui Sun, Qingyu Yang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yuekun Dai, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
Developing and integrating advanced image sensors with novel algorithms in camera systems is increasingly prevalent with the growing demand for computational photography and imaging on mobile platforms.
1 code implementation • CVPR 2023 • Yuhui Wu, Chen Pan, Guoqing Wang, Yang Yang, Jiwei Wei, Chongyi Li, Heng Tao Shen
To address this issue, we propose a novel semantic-aware knowledge-guided framework (SKF) that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
Ranked #3 on Low-Light Image Enhancement on LOLv2
1 code implementation • CVPR 2023 • Ruicheng Feng, Chongyi Li, Huaijin Chen, Shuai Li, Jinwei Gu, Chen Change Loy
Due to the difficulty of collecting large-scale, perfectly aligned paired training data for Under-Display Camera (UDC) image restoration, previous methods resort to monitor-based imaging systems or simulation-based methods, sacrificing the realism of the data and introducing domain gaps.
no code implementations • ICCV 2023 • Zhexin Liang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
To solve this issue, we devise a prompt learning framework that first learns an initial prompt pair by constraining the text-image similarity between the prompt (negative/positive sample) and the corresponding image (backlit image/well-lit image) in the CLIP latent space.
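A minimal sketch of the core signal described above: score an image against a negative/positive text pair in CLIP's latent space and turn the softmaxed similarity into a loss that rewards looking "well-lit". It uses fixed text prompts in place of the learned prompt pair and assumes the openai `clip` package; the prompt wording and model choice are assumptions, and the paper's full prompt-learning and fine-tuning loop is not reproduced.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Fixed prompts standing in for the learned (negative, positive) prompt pair.
prompts = clip.tokenize(["a backlit photo", "a well-lit photo"]).to(device)

def well_lit_loss(image_path):
    """Return a scalar that is small when CLIP judges the image 'well-lit'."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(image)
        txt_feat = model.encode_text(prompts)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    logits = model.logit_scale.exp() * img_feat @ txt_feat.T
    sims = logits.softmax(dim=-1)      # (1, 2): [backlit, well-lit]
    return 1.0 - sims[0, 1]            # low when 'well-lit' wins

# print(well_lit_loss("photo.jpg"))    # path is a placeholder
```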
no code implementations • 29 Mar 2023 • Man Zhou, Naishan Zheng, Jie Huang, Chunle Guo, Chongyi Li
We investigate the efficacy of our belief from three perspectives: 1) from task-customized MAE to native MAE, 2) from image task to video task, and 3) from transformer structure to convolution neural network structure.
no code implementations • 29 Mar 2023 • Man Zhou, Naishan Zheng, Jie Huang, Xiangyu Rui, Chunle Guo, Deyu Meng, Chongyi Li, Jinwei Gu
In this paper, orthogonal to the existing data and model studies, we instead devote our efforts to investigating the potential of the loss function from a new perspective and present our belief that "Random Weights Networks can Be Acted as Loss Prior Constraint for Image Restoration".
1 code implementation • CVPR 2023 • Yuekun Dai, Yihang Luo, Shangchen Zhou, Chongyi Li, Chen Change Loy
With the dataset, neural networks can be trained to remove the reflective flares effectively.
no code implementations • 23 Feb 2023 • Chongyi Li, Chun-Le Guo, Man Zhou, Zhexin Liang, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
Our approach is motivated by a few unique characteristics of the Fourier domain: 1) most luminance information is concentrated in the amplitude, while noise is closely related to the phase, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. By embedding the Fourier transform into our network, the amplitude and phase of a low-light image are processed separately to avoid amplifying noise when enhancing luminance.
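The two Fourier-domain observations above are easy to act on in code: split an image into amplitude and phase with an FFT, modify only the amplitude (where luminance lives), and recombine. The minimal PyTorch sketch below only demonstrates the split/recombine step with a global amplitude gain; it is not the paper's network.

```python
import torch

def amplitude_phase_split(img):
    """img: (B, C, H, W) float tensor -> (amplitude, phase) of its 2D FFT."""
    spec = torch.fft.fft2(img, norm="ortho")
    return torch.abs(spec), torch.angle(spec)

def recombine(amplitude, phase):
    """Inverse of the split: rebuild the spatial image from amplitude and phase."""
    spec = torch.polar(amplitude, phase)          # amplitude * exp(i * phase)
    return torch.fft.ifft2(spec, norm="ortho").real

x = torch.rand(1, 3, 64, 64) * 0.1                # a dim, 'low-light' toy image
amp, pha = amplitude_phase_split(x)

# Boost only the amplitude (luminance); leave the phase untouched.
brightened = recombine(amp * 3.0, pha).clamp(0, 1)

# Round-trip check: splitting and recombining reproduces the input.
assert torch.allclose(recombine(amp, pha), x, atol=1e-5)
print(brightened.mean().item() > x.mean().item())  # True
```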
1 code implementation • ICCV 2023 • Fu-Zhao Ou, Baoliang Chen, Chongyi Li, Shiqi Wang, Sam Kwong
Furthermore, we design an easy-to-hard training scheduler based on the inter-domain uncertainty and intra-domain quality margin as well as the ranking-based domain adversarial network to enhance the effectiveness of transfer learning and further reduce the source risk in domain adaptation.
1 code implementation • CVPR 2023 • Xin Jin, Ling-Hao Han, Zhen Li, Chun-Le Guo, Zhi Chai, Chongyi Li
The exclusive properties of RAW data have shown great potential for low-light image enhancement.
1 code implementation • ICCV 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li
In this paper, we propose a solution that improves lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy.
no code implementations • ICCV 2023 • Qi Zhu, Man Zhou, Naishan Zheng, Chongyi Li, Jie Huang, Feng Zhao
Video deblurring aims to restore the latent video frames from their blurred counterparts.
no code implementations • 12 Dec 2022 • Qixin Yan, Chunle Guo, Jixin Zhao, Yuekun Dai, Chen Change Loy, Chongyi Li
The key insights of this study are modeling component-specific correspondence for local makeup transfer, capturing long-range dependencies for global makeup transfer, and enabling efficient makeup transfer via a single-path structure.
no code implementations • 15 Oct 2022 • Keyu Yan, Man Zhou, Jie Huang, Feng Zhao, Chengjun Xie, Chongyi Li, Danfeng Hong
Panchromatic (PAN) and multi-spectral (MS) image fusion, known as pan-sharpening, refers to super-resolving low-resolution (LR) MS images in the spatial domain to generate the expected high-resolution (HR) MS images, conditioned on the corresponding high-resolution PAN images.
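To make the input/output relationship concrete, here is a classical component-substitution baseline (a simple IHS-style detail injection), not the paper's deep model: the LR MS image is upsampled to the PAN resolution and the PAN's spatial detail is injected into each band. Band counts, the scale factor, and the injection rule are assumptions.

```python
import torch
import torch.nn.functional as F

def naive_pansharpen(lr_ms, pan):
    """lr_ms: (B, C, h, w) low-resolution multi-spectral image
    pan:   (B, 1, H, W) high-resolution panchromatic image
    returns a (B, C, H, W) pan-sharpened estimate (simple IHS-style injection)."""
    up_ms = F.interpolate(lr_ms, size=pan.shape[-2:], mode="bicubic",
                          align_corners=False)
    intensity = up_ms.mean(dim=1, keepdim=True)   # crude intensity component
    detail = pan - intensity                      # high-frequency spatial detail
    return (up_ms + detail).clamp(0, 1)

# Toy usage: 4-band MS at 64x64, PAN at 256x256 (scale factor 4).
lr_ms = torch.rand(1, 4, 64, 64)
pan = torch.rand(1, 1, 256, 256)
hr_ms = naive_pansharpen(lr_ms, pan)
print(hr_ms.shape)  # torch.Size([1, 4, 256, 256])
```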
1 code implementation • 12 Oct 2022 • Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
In this paper, we introduce Flare7K, the first nighttime flare removal dataset, which is generated based on the observation and statistics of real-world nighttime lens flares.
Ranked #2 on Flare Removal on Flare7K
1 code implementation • 11 Oct 2022 • Man Zhou, Hu Yu, Jie Huang, Feng Zhao, Jinwei Gu, Chen Change Loy, Deyu Meng, Chongyi Li
Existing convolutional neural networks widely adopt spatial down-/up-sampling for multi-scale modeling.
3 code implementations • 6 Oct 2022 • Runmin Cong, Qinwei Lin, Chen Zhang, Chongyi Li, Xiaochun Cao, Qingming Huang, Yao Zhao
Focusing on the issue of how to effectively capture and utilize cross-modality information in RGB-D salient object detection (SOD) task, we present a convolutional neural network (CNN) model, named CIR-Net, based on the novel cross-modality interaction and refinement.
1 code implementation • 15 Sep 2022 • Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 15 Sep 2022 • Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 15 Sep 2022 • Ruicheng Feng, Chongyi Li, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Jun Jiang, Qingyu Yang, Chen Change Loy, Jinwei Gu
In this paper, we summarize and review the Under-Display Camera (UDC) Image Restoration track on MIPI 2022.
1 code implementation • 15 Sep 2022 • Wenxiu Sun, Qingpeng Zhu, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Jun Jiang, Qingyu Yang, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 15 Sep 2022 • Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 14 Aug 2022 • Chunle Guo, Ruiqi Wu, Xin Jin, Linghao Han, Zhi Chai, Weidong Zhang, Chongyi Li
To achieve that, we also contribute a dataset, URankerSet, containing sufficient results enhanced by different UIE algorithms and the corresponding perceptual rankings, to train our URanker.
no code implementations • 28 Jul 2022 • Chongyi Li, Chunle Guo, Ruicheng Feng, Shangchen Zhou, Chen Change Loy
Our method inherits the zero-reference learning and curve-based framework from an effective low-light image enhancement method, Zero-DCE, while further accelerating its inference, reducing its model size, and extending it to controllable exposure adjustment.
1 code implementation • 22 Jun 2022 • Shangchen Zhou, Kelvin C. K. Chan, Chongyi Li, Chen Change Loy
In this paper, we demonstrate that a learned discrete codebook prior in a small proxy space largely reduces the uncertainty and ambiguity of restoration mapping by casting blind face restoration as a code prediction task, while providing rich visual atoms for generating high-quality faces.
Ranked #1 on Blind Face Restoration on CelebA-Test
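The "code prediction" formulation above can be sketched generically: a small learned codebook of visual atoms, plus a predictor that maps degraded-face features to discrete code indices whose embeddings a decoder turns back into a face. This is a schematic VQ-style sketch with made-up sizes and plain placeholder modules, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CodePredictionSketch(nn.Module):
    def __init__(self, num_codes=1024, code_dim=256, num_tokens=16 * 16):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)   # learned visual atoms
        self.encoder = nn.Linear(code_dim, code_dim)         # stands in for a CNN encoder
        self.predictor = nn.TransformerEncoder(              # predicts a code per token
            nn.TransformerEncoderLayer(code_dim, nhead=8, batch_first=True),
            num_layers=2)
        self.to_logits = nn.Linear(code_dim, num_codes)
        self.decoder = nn.Linear(code_dim, 3 * 16)            # stands in for a CNN decoder

    def forward(self, degraded_tokens):
        """degraded_tokens: (B, N, code_dim) features of the degraded face."""
        h = self.predictor(self.encoder(degraded_tokens))
        logits = self.to_logits(h)                            # (B, N, num_codes)
        indices = logits.argmax(dim=-1)                       # discrete code prediction
        quantized = self.codebook(indices)                    # look up high-quality atoms
        return self.decoder(quantized), indices

model = CodePredictionSketch()
tokens = torch.randn(2, 16 * 16, 256)
recon, codes = model(tokens)
print(recon.shape, codes.shape)  # torch.Size([2, 256, 48]) torch.Size([2, 256])
```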
2 code implementations • 19 Apr 2022 • Runmin Cong, Ning Yang, Chongyi Li, Huazhu Fu, Yao Zhao, Qingming Huang, Sam Kwong
In this paper, we propose a global-and-local collaborative learning architecture, which includes a global correspondence modeling (GCM) and a local correspondence modeling (LCM) to capture comprehensive inter-image corresponding relationship among different images from the global and local perspectives.
1 code implementation • 7 Feb 2022 • Shangchen Zhou, Chongyi Li, Chen Change Loy
With the pipeline, we present the first large-scale dataset for joint low-light enhancement and deblurring.
2 code implementations • CVPR 2022 • Chun-Le Guo, Qixin Yan, Saeed Anwar, Runmin Cong, Wenqi Ren, Chongyi Li
Although Transformers have dominated various computer vision tasks, directly leveraging them for image dehazing is challenging: 1) they tend to produce ambiguous and coarse details that are undesirable for image reconstruction; 2) the position embedding of previous Transformers is defined in logical or spatial order, which neglects the spatially varying haze densities and leads to sub-optimal dehazing performance.
no code implementations • 27 Dec 2021 • Hai-Han Sun, Yee Hui Lee, Qiqi Dai, Chongyi Li, Genevieve Ow, Mohamed Lokman Mohd Yusof, Abdulkadir C. Yucel
However, the task of estimating root-related parameters is challenging as the root reflection is a complex function of multiple root parameters and root orientations.
1 code implementation • 2 Aug 2021 • Shi Qiu, Yunfan Wu, Saeed Anwar, Chongyi Li
Object detection in three-dimensional (3D) space attracts much interest from academia and industry since it is an essential task in AI-driven applications such as robotics, autonomous driving, and augmented reality.
5 code implementations • 27 Apr 2021 • Chongyi Li, Saeed Anwar, Junhui Hou, Runmin Cong, Chunle Guo, Wenqi Ren
As a result, our network can effectively improve the visual quality of underwater images by exploiting a multi-color-space embedding and the advantages of both physical model-based and learning-based methods.
Ranked #2 on Underwater Image Restoration on LSUI (using extra training data)
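The "multiple color spaces" idea above amounts to feeding the network an input that stacks several color representations of the same underwater image. A minimal sketch, assuming RGB/HSV/Lab as the spaces (the paper's exact choice and normalization may differ):

```python
import numpy as np
from skimage import color

def multi_color_space_stack(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3).
    Returns a (H, W, 9) stack of RGB, HSV, and (rescaled) Lab channels."""
    hsv = color.rgb2hsv(rgb)
    lab = color.rgb2lab(rgb)
    lab = lab / np.array([100.0, 128.0, 128.0])    # roughly rescale Lab channels
    return np.concatenate([rgb, hsv, lab], axis=-1).astype(np.float32)

img = np.random.rand(128, 128, 3)
stacked = multi_color_space_stack(img)
print(stacked.shape)  # (128, 128, 9)
```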
3 code implementations • 21 Apr 2021 • Chongyi Li, Chunle Guo, Linghao Han, Jun Jiang, Ming-Ming Cheng, Jinwei Gu, Chen Change Loy
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
1 code implementation • CVPR 2021 • Ruicheng Feng, Chongyi Li, Huaijin Chen, Shuai Li, Chen Change Loy, Jinwei Gu
Recent development of Under-Display Camera (UDC) systems provides a true bezel-less and notch-free viewing experience on smartphones (and TV, laptops, tablets), while allowing images to be captured from the selfie camera embedded underneath.
4 code implementations • 1 Mar 2021 • Chongyi Li, Chunle Guo, Chen Change Loy
This paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
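The "image-specific curve estimation" above has a compact form: Zero-DCE's light-enhancement curve LE(x) = x + α·x·(1−x) is applied iteratively with per-pixel, per-channel curve parameters predicted by the network. The sketch below applies that published curve, with random parameter maps standing in for the network's output.

```python
import torch

def apply_le_curves(x, curve_maps):
    """Iteratively apply Zero-DCE's light-enhancement curve.

    x:          (B, 3, H, W) low-light image in [0, 1]
    curve_maps: list of (B, 3, H, W) per-pixel curve parameters in [-1, 1],
                one map per iteration (predicted by the network in the paper).
    """
    for alpha in curve_maps:
        x = x + alpha * x * (1.0 - x)     # LE(x) = x + alpha * x * (1 - x)
    return x.clamp(0, 1)

# Toy usage: 8 iterations with random positive parameter maps (brightening).
low_light = torch.rand(1, 3, 64, 64) * 0.2
curves = [torch.rand(1, 3, 64, 64) * 0.8 for _ in range(8)]
enhanced = apply_le_curves(low_light, curves)
print(low_light.mean().item(), "->", enhanced.mean().item())
```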
no code implementations • 29 Jan 2021 • Hai-Han Sun, Yee Hui Lee, Chongyi Li, Genevieve Ow, Mohamed Lokman Mohd Yusof, Abdulkadir C. Yucel
The horizontal orientation angle and vertical inclination angle of an elongated subsurface object are key parameters for object identification and imaging in ground penetrating radar (GPR) applications.
3 code implementations • 26 Nov 2020 • Qijian Zhang, Runmin Cong, Chongyi Li, Ming-Ming Cheng, Yuming Fang, Xiaochun Cao, Yao Zhao, Sam Kwong
Despite the remarkable advances in visual saliency analysis for natural scene images (NSIs), salient object detection (SOD) for optical remote sensing images (RSIs) still remains an open and challenging problem.
1 code implementation • NeurIPS 2020 • Qijian Zhang, Runmin Cong, Junhui Hou, Chongyi Li, Yao Zhao
In the first stage, we propose a group-attentional semantic aggregation module that models inter-image relationships to generate the group-wise semantic representations.
no code implementations • 26 Oct 2020 • Chongyi Li, Chunle Guo, Qiming Ai, Shangchen Zhou, Chen Change Loy
This paper presents a new method, called FlexiCurve, for photo enhancement.
no code implementations • 2 Oct 2020 • Chongyi Li, Runmin Cong, Chunle Guo, Hua Li, Chunjie Zhang, Feng Zheng, Yao Zhao
In this paper, we propose a novel Parallel Down-up Fusion network (PDF-Net) for SOD in optical RSIs, which takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
1 code implementation • 25 Aug 2020 • Saeed Anwar, Muhammad Tahir, Chongyi Li, Ajmal Mian, Fahad Shahbaz Khan, Abdul Wahab Muzaffar
Image colorization is the process of estimating RGB colors for grayscale images or video frames to improve their aesthetic and perceptual quality.
no code implementations • 7 Aug 2020 • Chongyi Li, Huazhu Fu, Runmin Cong, Zechao Li, Qianqian Xu
We further demonstrate the advantages of the proposed method for improving the accuracy of retinal vessel segmentation.
1 code implementation • ECCV 2020 • Chongyi Li, Runmin Cong, Yongri Piao, Qianqian Xu, Chen Change Loy
Second, we propose an adaptive feature selection (AFS) module to select saliency-related features and suppress the inferior ones.
Ranked #8 on RGB-D Salient Object Detection on NJU2K
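As a hedged, generic illustration of what "selecting saliency-related features and suppressing the inferior ones" can look like, here is a squeeze-and-excitation-style channel gate; the paper's AFS module is its own design and likely differs in detail.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Reweight feature channels so informative ones are kept and weak ones suppressed."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat):                         # feat: (B, C, H, W)
        weights = self.mlp(feat.mean(dim=(2, 3)))    # (B, C) gate per channel
        return feat * weights[:, :, None, None]

gate = ChannelGate(64)
feat = torch.randn(2, 64, 32, 32)
print(gate(feat).shape)  # torch.Size([2, 64, 32, 32])
```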
9 code implementations • CVPR 2020 • Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, Runmin Cong
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Ranked #1 on Color Constancy on INTEL-TUT2
no code implementations • ICLR 2020 • Miao Yang, Ke Hu, Chongyi Li, Zhiqiang Wei
By substituting the inception module with the I-A module, the Inception-ResNet-v2 network achieves a 10.7% top-1 error rate and a 0% top-5 error rate on the subset of ILSVRC-2012, which further illustrates the effect of background attention in image classification.
no code implementations • 17 Jul 2019 • Saeed Anwar, Chongyi Li
In this paper, our main aim is two-fold: 1) to provide a comprehensive and in-depth survey of deep learning-based underwater image enhancement, covering various perspectives ranging from algorithms to open issues, and 2) to conduct a qualitative and quantitative comparison of the deep algorithms on diverse datasets to serve as a benchmark, which has barely been explored before.
no code implementations • 20 Jun 2019 • Chongyi Li, Runmin Cong, Junhui Hou, Sanyi Zhang, Yue Qian, Sam Kwong
Owing to the various object types and scales, diverse imaging orientations, and cluttered backgrounds in optical remote sensing images (RSIs), it is difficult to directly extend the success of salient object detection for natural scene images to optical RSIs.
1 code implementation • 11 Jan 2019 • Chongyi Li, Chunle Guo, Wenqi Ren, Runmin Cong, Junhui Hou, Sam Kwong, Dacheng Tao
In this paper, we construct an Underwater Image Enhancement Benchmark (UIEB) including 950 real-world underwater images, 890 of which have the corresponding reference images.
Ranked #5 on Underwater Image Restoration on LSUI (using extra training data)
2 code implementations • 10 Jul 2018 • Saeed Anwar, Chongyi Li, Fatih Porikli
In an underwater scene, wavelength-dependent light absorption and scattering degrade the visibility of images, causing low contrast and distorted color casts.
no code implementations • 21 Mar 2018 • Chongyi Li, Jichang Guo, Fatih Porikli, Huazhu Fu, Yanwei Pang
Different from previous learning-based methods, we propose a flexible cascaded CNN for single hazy image restoration, which considers the medium transmission and global atmospheric light jointly by two task-driven subnetworks.
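Once the two subnetworks output the medium transmission t(x) and the global atmospheric light A, a haze-free image is typically recovered with the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)). The recovery step below uses that standard formula; the two predictions are random placeholders standing in for the paper's task-driven subnetworks.

```python
import numpy as np

def recover_scene_radiance(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J * t + A * (1 - t).

    hazy:              (H, W, 3) float image in [0, 1]
    transmission:      (H, W) medium transmission predicted by one subnetwork
    atmospheric_light: (3,) global atmospheric light predicted by the other
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
    J = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(J, 0.0, 1.0)

# Toy usage with placeholder predictions.
hazy = np.random.rand(240, 320, 3).astype(np.float32)
t_pred = np.random.uniform(0.3, 0.9, size=(240, 320)).astype(np.float32)
A_pred = np.array([0.8, 0.8, 0.8], dtype=np.float32)
dehazed = recover_scene_radiance(hazy, t_pred, A_pred)
print(dehazed.shape)  # (240, 320, 3)
```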
no code implementations • 2 Dec 2017 • Chongyi Li, Jichang Guo, Fatih Porikli, Chunle Guo, Huazhu Fu, Xi Li
Despite the recent progress in image dehazing, several problems remain largely unsolved such as robustness for varying scenes, the visual quality of reconstructed images, and effectiveness and flexibility for applications.
no code implementations • 19 Oct 2017 • Chongyi Li, Jichang Guo, Chunle Guo
Underwater vision suffers from severe effects due to selective attenuation and scattering when light propagates through water.