no code implementations • ECCV 2020 • Jiangxin Dong, Jinshan Pan
We propose an effective feature dehazing unit (FDU), which is applied to the deep feature space to explore useful features for image dehazing based on the physics model.
Ranked #23 on Image Dehazing on SOTS Indoor
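The ECCV 2020 entry above describes a feature dehazing unit that applies the physics model in deep feature space. Below is a minimal, hypothetical sketch of that idea (not the paper's exact FDU): two small convolutions predict transmission-like and airlight-like feature maps, and the haze model J = (I - A(1 - t)) / t is inverted directly on the features.

```python
import torch
import torch.nn as nn

class FeatureDehazingUnit(nn.Module):
    """Sketch of a physics-inspired feature dehazing unit (hypothetical design).

    The atmospheric scattering model I = J*t + A*(1 - t) implies
    J = (I - A*(1 - t)) / t; the same inversion is applied to deep features,
    with transmission- and airlight-like maps predicted by small convolutions.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.trans = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid())
        self.airlight = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        t = self.trans(feat).clamp(min=1e-3)   # transmission-like map in (0, 1]
        a = self.airlight(feat)                # airlight-like features
        return (feat - a * (1.0 - t)) / t      # feature-space inversion of the haze model

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(FeatureDehazingUnit(64)(x).shape)    # torch.Size([1, 64, 32, 32])
```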
no code implementations • 12 Dec 2024 • Zhongbao Yang, Jiangxin Dong, Jinhui Tang, Jinshan Pan
Furthermore, to restore images that are realistic and visually pleasing, we develop a short-exposure guided diffusion model that explores useful features from short-exposure images and blurred regions to better constrain the diffusion process.
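The abstract above does not spell out how the short-exposure features guide the diffusion process; a common way to condition a diffusion model on auxiliary inputs is channel-wise concatenation inside the noise predictor. The sketch below illustrates that generic scheme only (GuidedDenoiser and reverse_step are hypothetical names, not the paper's actual model).

```python
import torch
import torch.nn as nn

class GuidedDenoiser(nn.Module):
    """Hypothetical noise predictor conditioned on short-exposure features
    by channel-wise concatenation (a common conditioning scheme)."""
    def __init__(self, channels: int = 3, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x_t: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, guide], dim=1))  # predicted noise

@torch.no_grad()
def reverse_step(model, x_t, guide, alpha_t, alpha_bar_t, sigma_t):
    """One DDPM-style reverse step x_t -> x_{t-1}, guided by short-exposure features."""
    eps = model(x_t, guide)
    mean = (x_t - (1 - alpha_t) / (1 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5
    return mean + sigma_t * torch.randn_like(x_t)
```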
no code implementations • 2 Dec 2024 • Hao Li, Xiang Chen, Jiangxin Dong, Jinhui Tang, Jinshan Pan
Despite the significant progress made by all-in-one models in universal image restoration, existing methods suffer from a generalization bottleneck in real-world scenarios, as they are mostly trained on small-scale synthetic datasets with limited degradations.
no code implementations • 27 Nov 2024 • Junyang Chen, Jinshan Pan, Jiangxin Dong
Faithful image super-resolution (SR) not only needs to recover images that appear realistic, similar to image generation tasks, but also requires that the restored images maintain fidelity and structural consistency with the input.
no code implementations • 26 Aug 2024 • Hao Li, Jiangxin Dong, Jinshan Pan
However, the key components in recurrent-based VSR networks significantly impact model efficiency, e.g., the alignment module occupies a substantial portion of model parameters, while the bidirectional propagation mechanism significantly amplifies the inference time.
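To make the cost argument above concrete, here is a minimal sketch of unidirectional recurrent propagation for VSR (alignment omitted); a bidirectional variant runs this loop forward and backward in time and fuses both states, roughly doubling propagation cost. This is an illustration of the mechanism, not the paper's network.

```python
import torch
import torch.nn as nn

class RecurrentPropagation(nn.Module):
    """Unidirectional recurrent propagation over a video feature sequence."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats):                   # feats: (B, T, C, H, W)
        b, t, c, h, w = feats.shape
        state = feats.new_zeros(b, c, h, w)     # hidden state carried across frames
        outputs = []
        for i in range(t):                      # single forward pass over time
            state = torch.relu(self.fuse(torch.cat([feats[:, i], state], dim=1)))
            outputs.append(state)
        return torch.stack(outputs, dim=1)      # (B, T, C, H, W)
```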
1 code implementation • 23 May 2024 • Lingshun Kong, Jiangxin Dong, Ming-Hsuan Yang, Jinshan Pan
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
2 code implementations • 25 Apr 2024 • Marcos V. Conde, Zhijun Lei, Wen Li, Cosmin Stejerean, Ioannis Katsavounidis, Radu Timofte, Kihwan Yoon, Ganzorig Gankhuyag, Jiangtao Lv, Long Sun, Jinshan Pan, Jiangxin Dong, Jinhui Tang, Zhiyuan Li, Hao Wei, Chenyang Ge, Dongyang Zhang, Tianle Liu, Huaian Chen, Yi Jin, Menghan Zhou, Yiqiang Yan, Si Gao, Biao Wu, Shaoli Liu, Chengjian Zheng, Diankai Zhang, Ning Wang, Xintao Qiu, Yuanbo Zhou, Kongxian Wu, Xinwei Dai, Hui Tang, Wei Deng, Qingquan Gao, Tong Tong, Jae-Hyeon Lee, Ui-Jin Choi, Min Yan, Xin Liu, Qian Wang, Xiaoqian Ye, Zhan Du, Tiansen Zhang, Long Peng, Jiaming Guo, Xin Di, Bohao Liao, Zhibo Du, Peize Xia, Renjing Pei, Yang Wang, Yang Cao, ZhengJun Zha, Bingnan Han, Hongyuan Yu, Zhuoyuan Wu, Cheng Wan, Yuqing Liu, Haodong Yu, Jizhe Li, Zhijuan Huang, Yuan Huang, Yajun Zou, Xianyu Guan, Qi Jia, Heng Zhang, Xuanwu Yin, Kunlong Zuo, Hyeon-Cheol Moon, Tae-hyun Jeong, Yoonmo Yang, Jae-Gon Kim, Jinwoo Jeong, Sunjei Kim
This paper introduces a novel benchmark as part of the AIS 2024 Real-Time Image Super-Resolution (RTSR) Challenge, which aims to upscale compressed images from 540p to 4K resolution (4x factor) in real-time on commercial GPUs.
3 code implementations • 16 Apr 2024 • Bin Ren, Nancy Mehta, Radu Timofte, Hongyuan Yu, Cheng Wan, Yuxin Hong, Bingnan Han, Zhuoyuan Wu, Yajun Zou, Yuqing Liu, Jizhe Li, Keji He, Chao Fan, Heng Zhang, Xiaolin Zhang, Xuanwu Yin, Kunlong Zuo, Bohao Liao, Peizhe Xia, Long Peng, Zhibo Du, Xin Di, Wangkai Li, Yang Wang, Wei Zhai, Renjing Pei, Jiaming Guo, Songcen Xu, Yang Cao, ZhengJun Zha, Yan Wang, Yi Liu, Qing Wang, Gang Zhang, Liou Zhang, Shijie Zhao, Long Sun, Jinshan Pan, Jiangxin Dong, Jinhui Tang, Xin Liu, Min Yan, Menghan Zhou, Yiqiang Yan, Yixuan Liu, Wensong Chan, Dehua Tang, Dong Zhou, Li Wang, Lu Tian, Barsoum Emad, Bohan Jia, Junbo Qiao, Yunshuai Zhou, Yun Zhang, Wei Li, Shaohui Lin, Shenglong Zhou, Binbin Chen, Jincheng Liao, Suiyi Zhao, Zhao Zhang, Bo wang, Yan Luo, Yanyan Wei, Feng Li, Mingshen Wang, Yawei Li, Jinhan Guan, Dehua Hu, Jiawei Yu, Qisheng Xu, Tao Sun, Long Lan, Kele Xu, Xin Lin, Jingtong Yue, Lehan Yang, Shiyi Du, Lu Qi, Chao Ren, Zeyu Han, YuHan Wang, Chaolin Chen, Haobo Li, Mingjun Zheng, Zhongbao Yang, Lianhong Song, Xingzhuo Yan, Minghan Fu, Jingyi Zhang, Baiang Li, Qi Zhu, Xiaogang Xu, Dan Guo, Chunle Guo, Jiadi Chen, Huanhuan Long, Chunjiang Duanmu, Xiaoyan Lei, Jie Liu, Weilin Jia, Weifeng Cao, Wenlong Zhang, Yanyu Mao, Ruilong Guo, Nihao Zhang, Qian Wang, Manoj Pandey, Maksym Chernozhukov, Giang Le, Shuli Cheng, Hongyuan Wang, Ziyan Wei, Qingting Tang, Liejun Wang, Yongming Li, Yanhui Guo, Hao Xu, Akram Khatami-Rizi, Ahmad Mahmoudi-Aznaveh, Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou, Amogh Joshi, Nikhil Akalwadi, Sampada Malagi, Palani Yashaswini, Chaitra Desai, Ramesh Ashok Tabib, Ujwala Patil, Uma Mudenagudi
In sub-track 1, the practical runtime performance of the submissions was evaluated, and the corresponding score was used to determine the ranking.
1 code implementation • 9 Apr 2024 • Yixin Yang, Jiangxin Dong, Jinhui Tang, Jinshan Pan
To explore this property for better spatial and temporal feature utilization, we develop a local attention module to aggregate the features from adjacent frames in a spatial-temporal neighborhood.
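The local attention module described above aggregates features from a spatial-temporal neighborhood of adjacent frames. The sketch below shows one generic way to do this, where each pixel of the center frame attends to a k x k window in every neighboring frame; layer names and sizes are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSpatioTemporalAttention(nn.Module):
    """Each center-frame pixel attends to a k x k neighborhood in adjacent frames."""
    def __init__(self, channels: int, window: int = 3):
        super().__init__()
        self.window = window
        self.q = nn.Conv2d(channels, channels, 1)
        self.kv = nn.Conv2d(channels, 2 * channels, 1)

    def forward(self, center, neighbors):   # center: (B,C,H,W), neighbors: (B,T,C,H,W)
        b, t, c, h, w = neighbors.shape
        q = self.q(center).reshape(b, c, h * w)                          # (B, C, HW)
        k, v = self.kv(neighbors.reshape(b * t, c, h, w)).chunk(2, dim=1)
        pad = self.window // 2
        # unfold keys/values into per-pixel local windows
        k = F.unfold(k, self.window, padding=pad).reshape(b, t, c, -1, h * w)
        v = F.unfold(v, self.window, padding=pad).reshape(b, t, c, -1, h * w)
        k = k.permute(0, 1, 3, 2, 4).reshape(b, -1, c, h * w)            # (B, T*k*k, C, HW)
        v = v.permute(0, 1, 3, 2, 4).reshape(b, -1, c, h * w)
        attn = torch.softmax((q.unsqueeze(1) * k).sum(2) / c ** 0.5, dim=1)
        out = (attn.unsqueeze(2) * v).sum(1)                             # (B, C, HW)
        return out.reshape(b, c, h, w)
```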
1 code implementation • 6 Apr 2024 • Hao Li, Xiang Chen, Jiangxin Dong, Jinhui Tang, Jinshan Pan
However, inaccurate alignment usually leads to aligned features with significant artifacts, which will be accumulated during propagation and thus affect video restoration.
Ranked #5 on Video Super-Resolution on Vid4 - 4x upscaling
1 code implementation • CVPR 2024 • Xiang Chen, Jinshan Pan, Jiangxin Dong
To better explore the common degradation representations from spatially-varying rain streaks, we incorporate intra-scale implicit neural representations based on pixel coordinates with the degraded inputs in a closed-loop design, enabling the learned features to facilitate rain removal and improve the robustness of the model in complex scenarios.
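The intra-scale implicit neural representation mentioned above conditions features on pixel coordinates. A generic coordinate-based formulation is sketched below: an MLP maps normalized coordinates, concatenated with degraded features at those positions, to a continuous representation. Layer sizes and names are hypothetical.

```python
import torch
import torch.nn as nn

class CoordinateINR(nn.Module):
    """Maps (normalized pixel coordinates, degraded features) to a representation."""
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, feats):                                            # feats: (B, C, H, W)
        b, c, h, w = feats.shape
        ys = torch.linspace(-1, 1, h, device=feats.device)
        xs = torch.linspace(-1, 1, w, device=feats.device)
        grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
        coords = grid.reshape(1, h * w, 2).expand(b, -1, -1)
        pix = feats.flatten(2).transpose(1, 2)                             # (B, HW, C)
        out = self.mlp(torch.cat([coords, pix], dim=-1))                   # (B, HW, C)
        return out.transpose(1, 2).reshape(b, c, h, w)
```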
no code implementations • 5 Oct 2023 • Xiang Chen, Jinshan Pan, Jiangxin Dong, Jinhui Tang
In this paper, we provide a comprehensive review of existing image deraining methods and a unified evaluation setting to assess their performance.
no code implementations • 12 Jun 2023 • Changguang Wu, Jiangxin Dong, Jinhui Tang
To further accelerate inference, a lookup table method is employed for fast retrieval.
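Lookup-table inference, as mentioned above, replaces a learned per-pixel mapping with precomputed table retrieval. A minimal sketch of the general technique (nearest-neighbor retrieval; real LUT methods typically interpolate, and the function here is a stand-in, not the paper's model):

```python
import numpy as np

def build_lut(fn, bins: int = 17):
    """Precompute fn over a uniform grid of quantized inputs in [0, 1]."""
    grid = np.linspace(0.0, 1.0, bins, dtype=np.float32)
    return np.array([fn(v) for v in grid], dtype=np.float32), bins

def lut_query(lut, bins, x):
    """Map continuous inputs to the nearest precomputed entry (fast retrieval)."""
    idx = np.clip(np.rint(x * (bins - 1)).astype(int), 0, bins - 1)
    return lut[idx]

# usage: replace an expensive per-pixel function with table retrieval
lut, bins = build_lut(lambda v: np.tanh(3.0 * v))   # stand-in for a learned mapping
pixels = np.random.rand(4, 4).astype(np.float32)
print(lut_query(lut, bins, pixels))
```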
1 code implementation • 13 Mar 2023 • Cong Wang, Jinshan Pan, WanYu Lin, Jiangxin Dong, Xiao-Ming Wu
For this purpose, we develop a prompt based on the features of depth differences between hazy input images and their corresponding clear counterparts, which can guide dehazing models toward better restoration.
1 code implementation • ICCV 2023 • Long Sun, Jiangxin Dong, Jinhui Tang, Jinshan Pan
Although numerous solutions have been proposed for image super-resolution, they are usually incompatible with low-power devices with many computational and memory constraints.
Ranked #53 on Image Super-Resolution on Set14 - 4x upscaling
no code implementations • ICCV 2023 • Xiang Li, Jinshan Pan, Jinhui Tang, Jiangxin Dong
We develop a hybrid dynamic-Transformer block (HDTB) that integrates the MHDLSA and SparseGSA for both local and global feature exploration.
no code implementations • ICCV 2023 • Jiangxin Dong, Jinshan Pan, Zhongbao Yang, Jinhui Tang
We present a simple and effective Multi-scale Residual Low-Pass Filter Network (MRLPFNet) that jointly explores the image details and main structures for image deblurring.
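The low-pass filtering idea above separates main structures from details. A generic residual low-pass split is sketched below; it uses a fixed box filter and two hypothetical branches, purely to illustrate the decomposition rather than the paper's MRLPFNet block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualLowPassSplit(nn.Module):
    """Low-pass branch captures main structures; the residual carries details."""
    def __init__(self, channels: int, kernel: int = 5):
        super().__init__()
        self.kernel = kernel
        self.structure_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.detail_branch = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat):
        low = F.avg_pool2d(feat, self.kernel, stride=1, padding=self.kernel // 2)
        detail = feat - low                                  # high-frequency residual
        return self.structure_branch(low) + self.detail_branch(detail) + feat
```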
1 code implementation • CVPR 2023 • Jinshan Pan, Boming Xu, Jiangxin Dong, Jianjun Ge, Jinhui Tang
In contrast to existing methods that directly align adjacent frames without discrimination, we develop a deep discriminative spatial and temporal network to facilitate the spatial and temporal feature exploration for better video deblurring.
1 code implementation • CVPR 2023 • Lingshun Kong, Jiangxin Dong, Mingqiang Li, Jianjun Ge, Jinshan Pan
We present an effective and efficient method that explores the properties of Transformers in the frequency domain for high-quality image deblurring.
Ranked #2 on Image Deblurring on GoPro (using extra training data)
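The frequency-domain deblurring entry above exploits the fact that blur acts roughly as a per-frequency attenuation. The sketch below shows a generic frequency-domain mixing layer built with FFTs and a learnable spectral filter; it illustrates why such operators suit deblurring, not the paper's exact Transformer block.

```python
import torch
import torch.nn as nn

class FrequencyDomainMixer(nn.Module):
    """FFT -> learnable complex spectral filter -> inverse FFT."""
    def __init__(self, channels: int, h: int, w: int):
        super().__init__()
        # learnable filter over the half-spectrum produced by rfft2
        self.weight = nn.Parameter(torch.randn(channels, h, w // 2 + 1, 2) * 0.02)

    def forward(self, x):                       # x: (B, C, H, W) with H == h, W == w
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = spec * torch.view_as_complex(self.weight)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
```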
no code implementations • CVPR 2021 • Jiangxin Dong, Stefan Roth, Bernt Schiele
The classical maximum a-posteriori (MAP) framework for non-blind image deblurring requires defining suitable data and regularization terms, whose interplay yields the desired clear image through optimization.
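For reference, the classical quadratic instance of this MAP formulation, minimizing ||k * x - y||^2 + lam * ||grad x||^2, has a closed-form Fourier-domain solution; learned data and regularization terms replace these hand-crafted choices. The sketch below shows that classical baseline only.

```python
import numpy as np

def map_deconv(y, kernel, lam=0.01):
    """Closed-form MAP estimate for non-blind deblurring with a quadratic
    data term and a gradient regularizer, solved in the Fourier domain."""
    h, w = y.shape
    K = np.fft.fft2(kernel, s=(h, w))                   # blur kernel spectrum
    dx = np.fft.fft2(np.array([[1, -1]]), s=(h, w))     # horizontal gradient filter
    dy = np.fft.fft2(np.array([[1], [-1]]), s=(h, w))   # vertical gradient filter
    num = np.conj(K) * np.fft.fft2(y)
    den = np.abs(K) ** 2 + lam * (np.abs(dx) ** 2 + np.abs(dy) ** 2)
    return np.real(np.fft.ifft2(num / den))

# usage with a tiny box blur (illustrative input)
kernel = np.ones((5, 5)) / 25.0
blurred = np.random.rand(64, 64)                        # stands in for a blurred image
restored = map_deconv(blurred, kernel)
```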
1 code implementation • NeurIPS 2020 • Jiangxin Dong, Stefan Roth, Bernt Schiele
We present a simple and effective approach for non-blind image deblurring, combining classical techniques and deep learning.
no code implementations • ECCV 2018 • Jiangxin Dong, Jinshan Pan, Deqing Sun, Zhixun Su, Ming-Hsuan Yang
We propose a simple and effective discriminative framework to learn data terms that can adaptively handle blurred images in the presence of severe noise and outliers.
no code implementations • 2 Aug 2018 • Jinshan Pan, Jiangxin Dong, Yang Liu, Jiawei Zhang, Jimmy Ren, Jinhui Tang, Yu-Wing Tai, Ming-Hsuan Yang
We present an algorithm to directly solve numerous image restoration problems (e.g., image deblurring, image dehazing, image deraining, etc.).
no code implementations • ICCV 2017 • Jiangxin Dong, Jinshan Pan, Zhixun Su, Ming-Hsuan Yang
We analyze the relationship between the proposed algorithm and other blind deblurring methods with outlier handling, and show how to estimate intermediate latent images for blur kernel estimation in a principled way.
no code implementations • ICCV 2017 • Jinshan Pan, Jiangxin Dong, Yu-Wing Tai, Zhixun Su, Ming-Hsuan Yang
Solving blind image deblurring usually requires defining a data fitting function and image priors.