no code implementations • 9 Jan 2025 • Siyu Liu, Zheng-Peng Duan, Jia Ouyang, Jiayi Fu, Hyunhee Park, Zikun Liu, Chun-Le Guo, Chongyi Li
In this paper, we propose a personalized face restoration method, FaceMe, based on a diffusion model.
1 code implementation • 24 Dec 2024 • Shuhao Han, Haotian Fan, Jiachen Fu, Liang Li, Tao Li, Junhui Cui, Yunqiu Wang, Yang Tai, Jingwei Sun, Chunle Guo, Chongyi Li
This allows us to comprehensively evaluate the effectiveness of image-text alignment metrics for T2I models.
1 code implementation • 25 Oct 2024 • Yuekun Dai, Qinyue Li, Shangchen Zhou, Yihang Luo, Chongyi Li, Chen Change Loy
This process, often called paint bucket colorization, involves two main tasks: keyframe colorization, where colors are applied according to the character's color design sheet, and consecutive frame colorization, where these colors are replicated across adjacent frames.
1 code implementation • 8 Oct 2024 • Long Chen, Yuzhi Huang, Junyu Dong, Qi Xu, Sam Kwong, Huimin Lu, Huchuan Lu, Chongyi Li
In this survey, we first categorise existing algorithms into traditional machine learning-based methods and deep learning-based methods, and summarise them by considering learning strategy, experimental dataset, utilised features or frameworks, and learning stage.
1 code implementation • 28 Sep 2024 • Chu-Jie Qin, Rui-Qi Wu, Zikun Liu, Xin Lin, Chun-Le Guo, Hyun Hee Park, Chongyi Li
Our pipeline consists of two stages: masked image pre-training and fine-tuning with mask attribute conductance.
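For context on the first stage, here is a minimal, hypothetical sketch of masked image pre-training: random patches are hidden and the network is supervised to reconstruct only the masked regions. The function, masking scheme, and parameters are illustrative and not the authors' implementation; the second stage (fine-tuning with mask attribute conductance) is paper-specific and not sketched here.

```python
import torch

def masked_pretrain_step(img, net, mask_ratio=0.75, patch=16):
    """One illustrative pre-training step: hide random patches of the input
    and reconstruct them, computing the loss only on the masked regions."""
    B, C, H, W = img.shape
    mask = (torch.rand(B, 1, H // patch, W // patch) < mask_ratio).float()
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    pred = net(img * (1.0 - mask))               # the network only sees visible pixels
    return ((pred - img) ** 2 * mask).sum() / (mask.sum() * C + 1e-8)

# toy usage with an identity "network" standing in for the restoration backbone
loss = masked_pretrain_step(torch.rand(2, 3, 64, 64), torch.nn.Identity())
print(loss.item())
```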
no code implementations • 9 Aug 2024 • Ruicheng Feng, Chongyi Li, Chen Change Loy
Extensive experiments demonstrate the effectiveness of our method in capturing facial details consistently across video frames.
no code implementations • 4 Jul 2024 • Zheng-Peng Duan, Jiawei Zhang, Zheng Lin, Xin Jin, Dongqing Zou, Chunle Guo, Chongyi Li
Image retouching aims to enhance the visual quality of photos.
no code implementations • 11 Jun 2024 • Xin Jin, Chunle Guo, Xiaoming Li, Zongsheng Yue, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yuekun Dai, Peiqing Yang, Chen Change Loy, Ruoqi Li, Chang Liu, Ziyi Wang, Yao Du, Jingjing Yang, Long Bao, Heng Sun, Xiangyu Kong, Xiaoxia Xing, Jinlong Wu, Yuanyang Xue, Hyunhee Park, Sejun Song, Changho Kim, Jingfan Tan, Wenhan Luo, Zikun Liu, Mingde Qiao, Junjun Jiang, Kui Jiang, Yao Xiao, Chuyang Sun, Jinhui Hu, Weijian Ruan, Yubo Dong, Kai Chen, Hyejeong Jo, Jiahao Qin, Bingjie Han, Pinle Qin, Rui Chai, Pengyuan Wang
The increasing demand for computational photography and imaging on mobile platforms has led to the widespread development and integration of advanced image sensors with novel algorithms in camera systems.
2 code implementations • 10 Jun 2024 • Xin Jin, Pengyi Jiao, Zheng-Peng Duan, Xingchao Yang, Chun-Le Guo, Bo Ren, Chongyi Li
Volumetric-rendering-based methods, like NeRF, excel in HDR view synthesis from RAW images, especially for nighttime scenes.
1 code implementation • 8 May 2024 • Yaqi Wu, Zhihao Fan, Xiaofeng Chu, Jimmy S. Ren, Xiaoming Li, Zongsheng Yue, Chongyi Li, Shangcheng Zhou, Ruicheng Feng, Yuekun Dai, Peiqing Yang, Chen Change Loy, Senyan Xu, Zhijing Sun, Jiaying Zhu, Yurui Zhu, Xueyang Fu, Zheng-Jun Zha, Jun Cao, Cheng Li, Shu Chen, Liang Ma, Shiyang Zhou, Haijin Zeng, Kai Feng, Yongyong Chen, Jingyong Su, Xianyu Guan, Hongyuan Yu, Cheng Wan, Jiamin Lin, Binnan Han, Yajun Zou, Zhuoyuan Wu, Yuan Huang, Yongsheng Yu, Daoan Zhang, Jizhe Li, Xuanwu Yin, Kunlong Zuo, Yunfan Lu, Yijie Xu, Wenzong Ma, Weiyu Guo, Hui Xiong, Wei Yu, Bingchun Luo, Sabari Nathan, Priya Kansal
The increasing demand for computational photography and imaging on mobile platforms has led to the widespread development and integration of advanced image sensors with novel algorithms in camera systems.
no code implementations • 30 Apr 2024 • Yuekun Dai, Dafeng Zhang, Xiaoming Li, Zongsheng Yue, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Peiqing Yang, Zhezhu Jin, Guanqun Liu, Chen Change Loy, Lize Zhang, Shuai Liu, Chaoyu Feng, Luyang Wang, Shuan Chen, Guangqi Shao, Xiaotao Wang, Lei Lei, Qirui Yang, Qihua Cheng, Zhiqiang Xu, Yihao Liu, Huanjing Yue, Jingyu Yang, Florin-Alexandru Vasluianu, Zongwei Wu, George Ciubotariu, Radu Timofte, Zhao Zhang, Suiyi Zhao, Bo wang, Zhichao Zuo, Yanyan Wei, Kuppa Sai Sri Teja, Jayakar Reddy A, Girish Rongali, Kaushik Mitra, Zhihao Ma, Yongxu Liu, Wanying Zhang, Wei Shang, Yuhong He, Long Peng, Zhongxin Yu, Shaofei Luo, Jian Wang, Yuqi Miao, Baiang Li, Gang Wei, Rakshank Verma, Ritik Maheshwari, Rahul Tekchandani, Praful Hambarde, Satya Narayan Tazi, Santosh Kumar Vipparthi, Subrahmanyam Murala, Haopeng Zhang, Yingli Hou, Mingde Yao, Levin M S, Aniruth Sundararajan, Hari Kumar A
The increasing demand for computational photography and imaging on mobile platforms has led to the widespread development and integration of advanced image sensors with novel algorithms in camera systems.
1 code implementation • CVPR 2024 • Yuekun Dai, Shangchen Zhou, Qinyue Li, Chongyi Li, Chen Change Loy
In this work, we introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments rather than relying solely on direct visual correspondences.
no code implementations • 16 Feb 2024 • Zhexin Liang, Zhaochen Li, Shangchen Zhou, Chongyi Li, Chen Change Loy
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
1 code implementation • 16 Jan 2024 • Xinni Jiang, Zengsheng Kuang, Chunle Guo, Ruixun Zhang, Lei Cai, Xiao Fan, Chongyi Li
Guided depth super-resolution (GDSR) involves restoring missing depth details using the high-resolution RGB image of the same scene.
no code implementations • CVPR 2024 • Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, Xiangyu Zhang
In this paper, we present LAMP, a few-shot text-to-video framework that enables a text-to-image diffusion model to Learn A specific Motion Pattern with 8-16 videos on a single GPU.
1 code implementation • CVPR 2024 • Fu-Zhao Ou, Chongyi Li, Shiqi Wang, Sam Kwong
Furthermore, to alleviate the issue of the model placing excessive trust in inaccurate quality anchors, we propose a confidence calibration method that corrects the quality distribution during training by fully exploiting these objective quality factors, characterized as the merged-factor distribution.
1 code implementation • CVPR 2024 • Xiaoqian Lv, Shengping Zhang, Chenyang Wang, Yichen Zheng, Bineng Zhong, Chongyi Li, Liqiang Nie
Existing joint low-light enhancement and deblurring methods learn pixel-wise mappings from paired synthetic data, which results in limited generalization in real-world scenes.
no code implementations • 20 Dec 2023 • Yuhui Wu, Guoqing Wang, Zhiwen Wang, Yang Yang, Tianyu Li, Malu Zhang, Chongyi Li, Heng Tao Shen
By treating Retinex- and semantic-based priors as the condition, JoReS-Diff presents a unique perspective for establishing a diffusion model for LLIE and similar image enhancement tasks.
1 code implementation • 30 Nov 2023 • Yudong Wang, Jichang Guo, Wanru He, Huan Gao, Huihui Yue, Zenan Zhang, Chongyi Li
Coupled with 7 object detection models retrained using raw underwater images, we employ these 133 models to comprehensively analyze the effect of underwater image enhancement on underwater object detection.
1 code implementation • 16 Oct 2023 • Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, Xiangyu Zhang
Specifically, we design a first-frame-conditioned pipeline that uses an off-the-shelf text-to-image model for content generation so that our tuned video diffusion model mainly focuses on motion learning.
3 code implementations • ICCV 2023 • Shangchen Zhou, Chongyi Li, Kelvin C. K. Chan, Chen Change Loy
We also propose a mask-guided sparse video Transformer, which achieves high efficiency by discarding unnecessary and redundant tokens.
Ranked #1 on Video Inpainting on YouTube-VOS 2018
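As a rough illustration of the mask-guided sparsification idea above (not the authors' implementation), the sketch below keeps only the patch tokens that overlap the corrupted region, so self-attention runs on far fewer tokens; the function name and threshold are assumptions.

```python
import torch

def select_masked_tokens(tokens, mask_ratio, keep_thresh=0.0):
    """Keep only tokens whose patch overlaps the masked (corrupted) region.

    tokens:     (N, C) patch embeddings of one frame
    mask_ratio: (N,) fraction of masked pixels in each patch, in [0, 1]
    Returns the sparse token set and the indices needed to scatter the
    attention outputs back to their original positions.
    """
    keep = mask_ratio > keep_thresh
    idx = torch.nonzero(keep, as_tuple=True)[0]
    return tokens[idx], idx

# toy example: 16 patch tokens of dim 8, only 5 patches lie inside the hole
tokens = torch.randn(16, 8)
mask_ratio = torch.zeros(16)
mask_ratio[3:8] = 1.0
sparse_tokens, idx = select_masked_tokens(tokens, mask_ratio)
print(sparse_tokens.shape)  # torch.Size([5, 8]): attention now runs on 5 tokens, not 16
```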
1 code implementation • ICCV 2023 • Naishan Zheng, Man Zhou, Yanmeng Dong, Xiangyu Rui, Jie Huang, Chongyi Li, Feng Zhao
In this work, we propose a paradigm for low-light image enhancement that explores the potential of customized learnable priors to improve the transparency of the deep unfolding paradigm.
1 code implementation • 31 Aug 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li
In this paper, we propose a solution that improves the performance of lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy.
no code implementations • ICCV 2023 • Man Zhou, Jie Huang, Naishan Zheng, Chongyi Li
Such designs inject the image reasoning prior into deep unfolding networks while improving their interpretability and representation capability.
1 code implementation • 23 Aug 2023 • Jingchun Zhou, Zongxin He, Kin-Man Lam, Yudong Wang, Weishi Zhang, Chunle Guo, Chongyi Li
In this paper, we present a novel Amplitude-Modulated Stochastic Perturbation and Vortex Convolutional Network, AMSP-UOD, designed for underwater object detection.
1 code implementation • 23 Aug 2023 • Dehuan Zhang, Jingchun Zhou, Chunle Guo, Weishi Zhang, Chongyi Li
Therefore, we present synergistic multi-scale detail refinement via intrinsic supervision (SMDR-IS), a multi-stage framework for enhancing underwater scene details.
1 code implementation • ICCV 2023 • Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo, Xialei Liu, Chongyi Li, Ming-Ming Cheng
However, these methods are impeded by several critical limitations: a) the explicit calibration process is both labor- and time-intensive, b) challenge exists in transferring denoisers across different camera models, and c) the disparity between synthetic and real noise is exacerbated by digital gain.
Ranked #1 on Image Denoising on SID SonyA7S2 x300
1 code implementation • 26 Jun 2023 • Zhiqiang Yan, Yupeng Zheng, Chongyi Li, Jun Li, Jian Yang
Depth completion is the task of recovering dense depth maps from sparse ones, usually with the help of color images.
no code implementations • 25 Jun 2023 • Haoying Li, Jixin Zhao, Shangchen Zhou, Huajun Feng, Chongyi Li, Chen Change Loy
Existing image deblurring methods predominantly focus on global deblurring, inadvertently affecting the sharpness of backgrounds in locally blurred images and wasting unnecessary computation on sharp pixels, especially for high-resolution images.
1 code implementation • 15 Jun 2023 • Runmin Cong, Wenyu Yang, Wei zhang, Chongyi Li, Chun-Le Guo, Qingming Huang, Sam Kwong
Among existing UIE methods, Generative Adversarial Networks (GANs) based methods perform well in visual aesthetics, while the physical model-based methods have better scene adaptability.
1 code implementation • 7 Jun 2023 • Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yihang Luo, Chen Change Loy
To address this issue, we additionally provide the annotations of light sources in Flare7K++ and propose a new end-to-end pipeline to preserve the light source while removing lens flares.
no code implementations • 23 May 2023 • Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Qingpeng Zhu, Qianhui Sun, Wenxiu Sun, Chen Change Loy, Jinwei Gu
In this paper, we summarize and review the Nighttime Flare Removal track on MIPI 2023.
no code implementations • 27 Apr 2023 • Qingpeng Zhu, Wenxiu Sun, Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Qianhui Sun, Chen Change Loy, Jinwei Gu, Yi Yu, Yangke Huang, Kang Zhang, Meiya Chen, Yu Wang, Yongchao Li, Hao Jiang, Amrit Kumar Muduli, Vikash Kumar, Kunal Swami, Pankaj Kumar Bajpai, Yunchao Ma, Jiajun Xiao, Zhi Ling
To evaluate the performance of different depth completion methods, we organized an RGB+sparse ToF depth completion competition.
no code implementations • 20 Apr 2023 • Qianhui Sun, Qingyu Yang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yuekun Dai, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
Developing and integrating advanced image sensors with novel algorithms in camera systems has become prevalent with the increasing demand for computational photography and imaging on mobile platforms.
no code implementations • 20 Apr 2023 • Qianhui Sun, Qingyu Yang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yuekun Dai, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
Developing and integrating advanced image sensors with novel algorithms in camera systems has become prevalent with the increasing demand for computational photography and imaging on mobile platforms.
1 code implementation • CVPR 2023 • Yuhui Wu, Chen Pan, Guoqing Wang, Yang Yang, Jiwei Wei, Chongyi Li, Heng Tao Shen
To address this issue, we propose a novel semantic-aware knowledge-guided framework (SKF) that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
Ranked #5 on Low-Light Image Enhancement on LOLv2
1 code implementation • CVPR 2023 • Ruicheng Feng, Chongyi Li, Huaijin Chen, Shuai Li, Jinwei Gu, Chen Change Loy
Due to the difficulty in collecting large-scale and perfectly aligned paired training data for Under-Display Camera (UDC) image restoration, previous methods resort to monitor-based image systems or simulation-based methods, sacrificing the realness of the data and introducing domain gaps.
no code implementations • ICCV 2023 • Zhexin Liang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
To solve this issue, we devise a prompt learning framework that first learns an initial prompt pair by constraining the text-image similarity between the prompt (negative/positive sample) and the corresponding image (backlit image/well-lit image) in the CLIP latent space.
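A minimal sketch of the prompt-initialization idea described above, assuming the constraint is implemented as a two-way classification of CLIP image embeddings against a learnable negative/positive prompt pair; the loss form, scaling factor, and function names are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prompt_init_loss(img_feat, pos_prompt_feat, neg_prompt_feat, is_well_lit):
    """Classify image embeddings against a learnable positive/negative prompt
    pair via CLIP-style cosine similarity.

    img_feat:       (B, D) image embeddings from a frozen CLIP image encoder
    *_prompt_feat:  (D,)  text embeddings of the learnable prompts
    is_well_lit:    (B,)  1 for well-lit images, 0 for backlit images
    """
    img_feat = F.normalize(img_feat, dim=-1)
    prompts = F.normalize(torch.stack([neg_prompt_feat, pos_prompt_feat]), dim=-1)
    logits = 100.0 * img_feat @ prompts.t()      # (B, 2) scaled cosine similarities
    return F.cross_entropy(logits, is_well_lit.long())

# toy usage with random 512-d CLIP-sized features
loss = prompt_init_loss(torch.randn(4, 512), torch.randn(512), torch.randn(512),
                        torch.tensor([1, 0, 1, 0]))
print(loss.item())
```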
no code implementations • 29 Mar 2023 • Man Zhou, Naishan Zheng, Jie Huang, Chunle Guo, Chongyi Li
We investigate the efficacy of our belief from three perspectives: 1) from task-customized MAE to native MAE, 2) from image task to video task, and 3) from transformer structure to convolution neural network structure.
no code implementations • 29 Mar 2023 • Man Zhou, Naishan Zheng, Jie Huang, Xiangyu Rui, Chunle Guo, Deyu Meng, Chongyi Li, Jinwei Gu
In this paper, orthogonal to the existing data and model studies, we instead devote our efforts to investigating the potential of the loss function from a new perspective and present our belief that ``Random Weights Networks can Be Acted as Loss Prior Constraint for Image Restoration''.
1 code implementation • CVPR 2023 • Yuekun Dai, Yihang Luo, Shangchen Zhou, Chongyi Li, Chen Change Loy
With the dataset, neural networks can be trained to remove the reflective flares effectively.
no code implementations • 23 Feb 2023 • Chongyi Li, Chun-Le Guo, Man Zhou, Zhexin Liang, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
Our approach is motivated by a few unique characteristics of the Fourier domain: 1) most luminance information concentrates in the amplitudes, while noise is closely related to the phases, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. By embedding the Fourier transform into our network, we process the amplitude and phase of a low-light image separately to avoid amplifying noise when enhancing luminance.
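The amplitude/phase separation mentioned above can be made concrete with a few lines of FFT code; this is a generic sketch of the decomposition, not the paper's network, and the function names are illustrative.

```python
import torch

def split_amplitude_phase(x):
    """Decompose an image tensor (B, C, H, W) into Fourier amplitude and phase."""
    freq = torch.fft.fft2(x, norm="ortho")
    return freq.abs(), freq.angle()

def merge_amplitude_phase(amp, pha):
    """Recombine (possibly processed) amplitude and phase into the spatial domain."""
    return torch.fft.ifft2(torch.polar(amp, pha), norm="ortho").real

x = torch.rand(1, 3, 64, 64)
amp, pha = split_amplitude_phase(x)
recon = merge_amplitude_phase(amp, pha)
print(torch.allclose(x, recon, atol=1e-5))  # True: the decomposition is lossless
```

In an enhancement network built on this split, the amplitude branch would be brightened while the phase is left largely untouched, consistent with the observation that noise is tied mostly to the phase.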
1 code implementation • ICCV 2023 • Yuyan Zhou, Dong Liang, Songcan Chen, Sheng-Jun Huang, Shuo Yang, Chongyi Li
In this paper, we propose a solution that improves the performance of lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy.
1 code implementation • ICCV 2023 • Fu-Zhao Ou, Baoliang Chen, Chongyi Li, Shiqi Wang, Sam Kwong
Furthermore, we design an easy-to-hard training scheduler based on the inter-domain uncertainty and intra-domain quality margin as well as the ranking-based domain adversarial network to enhance the effectiveness of transfer learning and further reduce the source risk in domain adaptation.
no code implementations • ICCV 2023 • Qi Zhu, Man Zhou, Naishan Zheng, Chongyi Li, Jie Huang, Feng Zhao
Video deblurring aims to restore the latent video frames from their blurred counterparts.
1 code implementation • CVPR 2023 • Xin Jin, Ling-Hao Han, Zhen Li, Chun-Le Guo, Zhi Chai, Chongyi Li
The exclusive properties of RAW data have shown great potential for low-light image enhancement.
no code implementations • 12 Dec 2022 • Qixin Yan, Chunle Guo, Jixin Zhao, Yuekun Dai, Chen Change Loy, Chongyi Li
The key insights of this study are modeling component-specific correspondence for local makeup transfer, capturing long-range dependencies for global makeup transfer, and enabling efficient makeup transfer via a single-path structure.
no code implementations • 15 Oct 2022 • Keyu Yan, Man Zhou, Jie Huang, Feng Zhao, Chengjun Xie, Chongyi Li, Danfeng Hong
Panchromatic (PAN) and multi-spectral (MS) image fusion, named Pan-sharpening, refers to super-resolving low-resolution (LR) MS images in the spatial domain to generate the expected high-resolution (HR) MS images, conditioned on the corresponding high-resolution PAN images.
1 code implementation • 12 Oct 2022 • Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
In this paper, we introduce Flare7K, the first nighttime flare removal dataset, which is generated based on the observation and statistics of real-world nighttime lens flares.
Ranked #3 on Flare Removal on Flare7K
1 code implementation • 11 Oct 2022 • Man Zhou, Hu Yu, Jie Huang, Feng Zhao, Jinwei Gu, Chen Change Loy, Deyu Meng, Chongyi Li
Existing convolutional neural networks widely adopt spatial down-/up-sampling for multi-scale modeling.
3 code implementations • 6 Oct 2022 • Runmin Cong, Qinwei Lin, Chen Zhang, Chongyi Li, Xiaochun Cao, Qingming Huang, Yao Zhao
Focusing on the issue of how to effectively capture and utilize cross-modality information in RGB-D salient object detection (SOD) task, we present a convolutional neural network (CNN) model, named CIR-Net, based on the novel cross-modality interaction and refinement.
1 code implementation • 15 Sep 2022 • Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 15 Sep 2022 • Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 15 Sep 2022 • Ruicheng Feng, Chongyi Li, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Jun Jiang, Qingyu Yang, Chen Change Loy, Jinwei Gu
In this paper, we summarize and review the Under-Display Camera (UDC) Image Restoration track on MIPI 2022.
1 code implementation • 15 Sep 2022 • Wenxiu Sun, Qingpeng Zhu, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Jun Jiang, Qingyu Yang, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 15 Sep 2022 • Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng, Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
A detailed description of all models developed in this challenge is provided in this paper.
1 code implementation • 14 Aug 2022 • Chunle Guo, Ruiqi Wu, Xin Jin, Linghao Han, Zhi Chai, Weidong Zhang, Chongyi Li
To achieve that, we also contribute a dataset, URankerSet, containing sufficient results enhanced by different UIE algorithms and the corresponding perceptual rankings, to train our URanker.
no code implementations • 28 Jul 2022 • Chongyi Li, Chunle Guo, Ruicheng Feng, Shangchen Zhou, Chen Change Loy
Our method inherits the zero-reference learning and curve-based framework from an effective low-light image enhancement method, Zero-DCE, while offering faster inference, a smaller model size, and an extension to controllable exposure adjustment.
1 code implementation • 22 Jun 2022 • Shangchen Zhou, Kelvin C. K. Chan, Chongyi Li, Chen Change Loy
In this paper, we demonstrate that a learned discrete codebook prior in a small proxy space largely reduces the uncertainty and ambiguity of restoration mapping by casting blind face restoration as a code prediction task, while providing rich visual atoms for generating high-quality faces.
Ranked #2 on Blind Face Restoration on CelebA-Test
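The code-prediction idea above boils down to vector quantization against a learned codebook; the sketch below shows a nearest-neighbour lookup with illustrative sizes and is not the authors' Transformer-based code predictor.

```python
import torch

def quantize_to_codebook(z, codebook):
    """Map continuous features to their nearest codebook entries.

    z:        (N, D) encoder features of a degraded face
    codebook: (K, D) learned discrete visual atoms
    Returns the predicted code indices and the quantized features fed to the decoder.
    """
    dist = torch.cdist(z, codebook)    # (N, K) pairwise Euclidean distances
    idx = dist.argmin(dim=1)           # discrete code for each feature
    return idx, codebook[idx]

codebook = torch.randn(1024, 256)      # K=1024 atoms of dim 256 (illustrative sizes)
z = torch.randn(16 * 16, 256)          # a flattened 16x16 feature grid
idx, z_q = quantize_to_codebook(z, codebook)
print(idx.shape, z_q.shape)            # torch.Size([256]) torch.Size([256, 256])
```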
2 code implementations • 19 Apr 2022 • Runmin Cong, Ning Yang, Chongyi Li, Huazhu Fu, Yao Zhao, Qingming Huang, Sam Kwong
In this paper, we propose a global-and-local collaborative learning architecture, which includes a global correspondence modeling (GCM) and a local correspondence modeling (LCM) to capture comprehensive inter-image corresponding relationship among different images from the global and local perspectives.
1 code implementation • 7 Feb 2022 • Shangchen Zhou, Chongyi Li, Chen Change Loy
With the pipeline, we present the first large-scale dataset for joint low-light enhancement and deblurring.
Ranked #2 on Low-Light Image Enhancement on Sony-Total-Dark
2 code implementations • CVPR 2022 • Chun-Le Guo, Qixin Yan, Saeed Anwar, Runmin Cong, Wenqi Ren, Chongyi Li
Though Transformers have been applied to various computer vision tasks, directly leveraging a Transformer for image dehazing is challenging: 1) it tends to result in ambiguous and coarse details that are undesired for image reconstruction; 2) previous position embeddings of Transformers are defined in logical or spatial position order, neglecting the varying haze densities, which results in sub-optimal dehazing performance.
no code implementations • 27 Dec 2021 • Hai-Han Sun, Yee Hui Lee, Qiqi Dai, Chongyi Li, Genevieve Ow, Mohamed Lokman Mohd Yusof, Abdulkadir C. Yucel
However, the task of estimating root-related parameters is challenging as the root reflection is a complex function of multiple root parameters and root orientations.
1 code implementation • 2 Aug 2021 • Shi Qiu, Yunfan Wu, Saeed Anwar, Chongyi Li
Object detection in three-dimensional (3D) space attracts much interest from academia and industry since it is an essential task in AI-driven applications such as robotics, autonomous driving, and augmented reality.
7 code implementations • 27 Apr 2021 • Chongyi Li, Saeed Anwar, Junhui Hou, Runmin Cong, Chunle Guo, Wenqi Ren
As a result, our network can effectively improve the visual quality of underwater images by exploiting the embeddings of multiple color spaces and the advantages of both physical model-based and learning-based methods.
Ranked #3 on Underwater Image Restoration on LSUI (using extra training data)
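To illustrate the multi-color-space embedding idea above: one simple way to expose a network to several color spaces is to stack RGB, HSV, and Lab representations of the input, as sketched below. The paper's architecture is more involved than this, so treat the snippet purely as an input-construction example with assumed normalization.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2lab

def multi_color_space_input(rgb):
    """Stack RGB, HSV, and (roughly normalized) Lab along the channel axis,
    giving an encoder a 9-channel multi-color-space view of the image.

    rgb: (H, W, 3) float image in [0, 1]
    """
    hsv = rgb2hsv(rgb)
    lab = rgb2lab(rgb) / np.array([100.0, 128.0, 128.0])  # bring Lab into roughly [-1, 1]
    return np.concatenate([rgb, hsv, lab], axis=-1)

img = np.random.rand(64, 64, 3)
print(multi_color_space_input(img).shape)  # (64, 64, 9)
```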
3 code implementations • 21 Apr 2021 • Chongyi Li, Chunle Guo, Linghao Han, Jun Jiang, Ming-Ming Cheng, Jinwei Gu, Chen Change Loy
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
1 code implementation • CVPR 2021 • Ruicheng Feng, Chongyi Li, Huaijin Chen, Shuai Li, Chen Change Loy, Jinwei Gu
Recent development of Under-Display Camera (UDC) systems provides a true bezel-less and notch-free viewing experience on smartphones (and TV, laptops, tablets), while allowing images to be captured from the selfie camera embedded underneath.
5 code implementations • 1 Mar 2021 • Chongyi Li, Chunle Guo, Chen Change Loy
This paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
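The curve estimation formulated above applies a simple quadratic light-enhancement curve, LE(x) = x + alpha * x * (1 - x), iteratively with per-pixel curve parameter maps predicted by a small network. The sketch below shows only the curve application; the parameter-predicting CNN and the zero-reference losses are omitted, and the iteration count and alpha values here are illustrative.

```python
import torch

def apply_le_curves(x, alphas):
    """Iteratively apply the quadratic light-enhancement curve
    LE(x) = x + alpha * x * (1 - x) with a per-pixel alpha map per iteration.

    x:      (B, 3, H, W) low-light image in [0, 1]
    alphas: list of (B, 3, H, W) curve parameter maps in [-1, 1]
    """
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x

x = torch.rand(1, 3, 32, 32) * 0.3                     # a dark input
alphas = [torch.full_like(x, 0.8) for _ in range(8)]   # 8 iterations, constant maps here
print(apply_le_curves(x, alphas).mean() > x.mean())    # tensor(True): output is brighter
```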
no code implementations • 29 Jan 2021 • Hai-Han Sun, Yee Hui Lee, Chongyi Li, Genevieve Ow, Mohamed Lokman Mohd Yusof, Abdulkadir C. Yucel
The horizontal orientation angle and vertical inclination angle of an elongated subsurface object are key parameters for object identification and imaging in ground penetrating radar (GPR) applications.
3 code implementations • 26 Nov 2020 • Qijian Zhang, Runmin Cong, Chongyi Li, Ming-Ming Cheng, Yuming Fang, Xiaochun Cao, Yao Zhao, Sam Kwong
Despite the remarkable advances in visual saliency analysis for natural scene images (NSIs), salient object detection (SOD) for optical remote sensing images (RSIs) still remains an open and challenging problem.
1 code implementation • NeurIPS 2020 • Qijian Zhang, Runmin Cong, Junhui Hou, Chongyi Li, Yao Zhao
In the first stage, we propose a group-attentional semantic aggregation module that models inter-image relationships to generate the group-wise semantic representations.
no code implementations • 26 Oct 2020 • Chongyi Li, Chunle Guo, Qiming Ai, Shangchen Zhou, Chen Change Loy
This paper presents a new method, called FlexiCurve, for photo enhancement.
no code implementations • 2 Oct 2020 • Chongyi Li, Runmin Cong, Chunle Guo, Hua Li, Chunjie Zhang, Feng Zheng, Yao Zhao
In this paper, we propose a novel Parallel Down-up Fusion network (PDF-Net) for SOD in optical RSIs, which takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
1 code implementation • 25 Aug 2020 • Saeed Anwar, Muhammad Tahir, Chongyi Li, Ajmal Mian, Fahad Shahbaz Khan, Abdul Wahab Muzaffar
Image colorization estimates RGB colors for grayscale images or video frames to improve their aesthetic and perceptual quality.
no code implementations • 7 Aug 2020 • Chongyi Li, Huazhu Fu, Runmin Cong, Zechao Li, Qianqian Xu
We further demonstrate the advantages of the proposed method for improving the accuracy of retinal vessel segmentation.
1 code implementation • ECCV 2020 • Chongyi Li, Runmin Cong, Yongri Piao, Qianqian Xu, Chen Change Loy
Second, we propose an adaptive feature selection (AFS) module to select saliency-related features and suppress the inferior ones.
Ranked #9 on RGB-D Salient Object Detection on NJU2K
13 code implementations • CVPR 2020 • Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, Runmin Cong
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Ranked #1 on Color Constancy on INTEL-TUT2
no code implementations • ICLR 2020 • Miao Yang, Ke Hu, Chongyi Li, Zhiqiang Wei
By substituting the inception module with the I-A module, the Inception-ResnetV2 network achieves a 10.7% top-1 error rate and a 0% top-5 error rate on the subset of ILSVRC-2012, which further illustrates the function of the background attention in image classification.
no code implementations • 17 Jul 2019 • Saeed Anwar, Chongyi Li
In this paper, our main aim is two-fold: 1) to provide a comprehensive and in-depth survey of deep learning-based underwater image enhancement, covering various perspectives ranging from algorithms to open issues, and 2) to conduct a qualitative and quantitative comparison of the deep algorithms on diverse datasets to serve as a benchmark, which has barely been explored before.
no code implementations • 20 Jun 2019 • Chongyi Li, Runmin Cong, Junhui Hou, Sanyi Zhang, Yue Qian, Sam Kwong
Owing to the various object types and scales, diverse imaging orientations, and cluttered backgrounds in optical remote sensing images (RSIs), it is difficult to directly extend the success of salient object detection for natural scene images to optical RSIs.
1 code implementation • 11 Jan 2019 • Chongyi Li, Chunle Guo, Wenqi Ren, Runmin Cong, Junhui Hou, Sam Kwong, DaCheng Tao
In this paper, we construct an Underwater Image Enhancement Benchmark (UIEB) including 950 real-world underwater images, 890 of which have the corresponding reference images.
Ranked #6 on Underwater Image Restoration on LSUI (using extra training data)
2 code implementations • 10 Jul 2018 • Saeed Anwar, Chongyi Li, Fatih Porikli
In an underwater scene, wavelength-dependent light absorption and scattering degrade the visibility of images, causing low contrast and distorted color casts.
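For reference, the simplified underwater image formation model commonly assumed in this line of work (a sketch of the standard formulation, not necessarily the exact model used in this paper) is:

```latex
I_c(x) = J_c(x)\, t_c(x) + B_c \bigl(1 - t_c(x)\bigr), \qquad t_c(x) = e^{-\beta_c\, d(x)},
```

where, per color channel $c$, $I_c$ is the observed intensity, $J_c$ the scene radiance, $B_c$ the veiling (background) light, $\beta_c$ the wavelength-dependent attenuation coefficient, and $d(x)$ the scene depth; because $\beta_c$ is largest for red light, red attenuates fastest, which explains the low contrast and greenish-blue color casts.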
no code implementations • 21 Mar 2018 • Chongyi Li, Jichang Guo, Fatih Porikli, Huazhu Fu, Yanwei Pang
Different from previous learning-based methods, we propose a flexible cascaded CNN for single hazy image restoration, which considers the medium transmission and global atmospheric light jointly by two task-driven subnetworks.
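The two task-driven subnetworks correspond to the two unknowns of the standard atmospheric scattering model, shown below as general background (the exact formulation in the paper may differ):

```latex
I(x) = J(x)\, t(x) + A \bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta\, d(x)},
```

where $I$ is the hazy observation, $J$ the haze-free scene radiance, $t$ the medium transmission, $A$ the global atmospheric light, $\beta$ the scattering coefficient, and $d(x)$ the scene depth; once $t$ and $A$ are estimated, $J$ is recovered by inverting the equation.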
no code implementations • 2 Dec 2017 • Chongyi Li, Jichang Guo, Fatih Porikli, Chunle Guo, Huzhu Fu, Xi Li
Despite the recent progress in image dehazing, several problems remain largely unsolved such as robustness for varying scenes, the visual quality of reconstructed images, and effectiveness and flexibility for applications.
no code implementations • 19 Oct 2017 • Chongyi Li, Jichang Guo, Chunle Guo
Underwater vision suffers from severe degradation caused by selective attenuation and scattering when light propagates through water.