2 code implementations • ECCV 2020 • Xiaotong Luo, Yuan Xie, Yulun Zhang, Yanyun Qu, Cuihua Li, Yun Fu
Drawing lessons from the lattice filter bank, we design the lattice block (LB), in which two butterfly structures are applied to combine two residual blocks (RBs).
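The lattice block combines two residual blocks through butterfly-style cross connections with learnable combination coefficients. A minimal PyTorch sketch of the idea (the scalar coefficients and block layout are simplifying assumptions, not the authors' exact design):

```python
# Hedged sketch of a lattice-style block: two residual blocks (RBs) combined
# by two butterfly structures with learnable combination coefficients.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual block used as one arm of the butterfly."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class LatticeBlock(nn.Module):
    """Two RBs cross-combined by two butterflies (scalar coefficients assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.rb1, self.rb2 = ResidualBlock(channels), ResidualBlock(channels)
        self.a1, self.b1 = nn.Parameter(torch.ones(1)), nn.Parameter(torch.ones(1))
        self.a2, self.b2 = nn.Parameter(torch.ones(1)), nn.Parameter(torch.ones(1))

    def forward(self, x):
        # First butterfly: cross-combine the input with the first RB output.
        f1 = self.rb1(x)
        p, q = x + self.a1 * f1, x + self.b1 * f1
        # Second butterfly: cross-combine the two branches through the second RB.
        f2p, f2q = self.rb2(p), self.rb2(q)
        return (p + self.a2 * f2q) + (q + self.b2 * f2p)

print(LatticeBlock(64)(torch.randn(1, 64, 32, 32)).shape)
```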
no code implementations • 20 Mar 2025 • Sidi Yang, Binxiao Huang, Yulun Zhang, Dahai Yu, Yujiu Yang, Ngai Wong
While deep neural networks have revolutionized image denoising capabilities, their deployment on edge devices remains challenging due to substantial computational and memory requirements.
no code implementations • 12 Mar 2025 • Peng Hu, Chunming He, Lei Xu, Jingduo Tian, Sina Farsiu, Yulun Zhang, Pei Liu, Xiu Li
(2) In the codebook lookup stage, we implement a quality-conditioned Transformer-based framework.
1 code implementation • 9 Mar 2025 • Xiaoyang Liu, Yuquan Wang, Zheng Chen, JieZhang Cao, He Zhang, Yulun Zhang, Xiaokang Yang
In this paper, we conduct an in-depth exploration of diffusion models in deblurring and propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step, significantly improving inference efficiency while maintaining high fidelity.
1 code implementation • 9 Mar 2025 • Junyi Wu, Zhiteng Li, Zheng Hui, Yulun Zhang, Linghe Kong, Xiaokang Yang
Recently, Diffusion Transformers (DiTs) have emerged as a dominant architecture in video generation, surpassing U-Net-based models in terms of performance.
1 code implementation • 7 Mar 2025 • Libo Zhu, Haotong Qin, Kaicheng Yang, Wenbo Li, Yong Guo, Yulun Zhang, Susanto Rahardja, Xiaokang Yang
To explore more possibilities of quantized OSDSR, we propose QArtSR, an efficient method based on quantization via reverse-module and timestep retraining for OSDSR.
no code implementations • 3 Mar 2025 • Xiongfei Su, Siyuan Li, Yuning Cui, Miao Cao, Yulun Zhang, Zheng Chen, Zongliang Wu, Zedong Wang, Yuanlong Zhang, Xin Yuan
Image dehazing is a crucial task that involves the enhancement of degraded images to recover their sharpness and textures.
no code implementations • 27 Feb 2025 • Xiongfei Su, Tianyi Zhu, Lina Liu, Zheng Chen, Yulun Zhang, Siyuan Li, Juntian Ye, Feihu Xu, Xin Yuan
The domain of non-line-of-sight (NLOS) imaging is advancing rapidly, offering the capability to reveal occluded scenes that are not directly visible.
1 code implementation • 21 Feb 2025 • Kai Liu, Dehui Wang, Zhiteng Li, Zheng Chen, Yong Guo, Wenbo Li, Linghe Kong, Yulun Zhang
Experimentally, we observe that the degradation of quantization is mainly attributed to the quantization of activation instead of model weights.
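The observation that activations, rather than weights, dominate quantization error can be illustrated with a toy experiment (a sketch using plain min-max uniform quantization; the synthetic distributions and bit-width are assumptions, not the paper's setup):

```python
# Toy illustration: compare quantization error of weight-like vs. activation-like
# tensors under simple 4-bit min-max uniform quantization.
import torch

def quantize_uniform(x, num_bits=4):
    """Min-max uniform quantization to num_bits, returned de-quantized."""
    qmax = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    q = torch.round((x - lo) / scale).clamp(0, qmax)
    return q * scale + lo

# Weights are typically narrow and bell-shaped; post-nonlinearity activations
# are often skewed with long tails, which hurts min-max scaling.
weights = torch.randn(4096) * 0.02
activations = torch.relu(torch.randn(4096) * 1.5) ** 2

for name, t in [("weights", weights), ("activations", activations)]:
    err = (t - quantize_uniform(t, 4)).pow(2).mean()
    print(f"{name}: 4-bit MSE = {err:.6f}")
```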
1 code implementation • 14 Feb 2025 • Jinpei Guo, Zheng Chen, Wenbo Li, Yong Guo, Yulun Zhang
The core of CODiff is the compression-aware visual embedder (CaVE), which extracts and leverages JPEG compression priors to guide the diffusion model.
1 code implementation • 4 Feb 2025 • Jianze Li, JieZhang Cao, Yong Guo, Wenbo Li, Yulun Zhang
We use the state-of-the-art diffusion model FLUX.1-dev as both the teacher model and the base model.
1 code implementation • 3 Feb 2025 • Xianglong Yan, Tianao Zhang, Zhiteng Li, Yulun Zhang
To address this issue, we propose a Progressive Binarization with Semi-Structured Pruning (PBS$^2$P) method for LLM compression.
1 code implementation • 1 Feb 2025 • Kai Liu, Kaicheng Yang, Zheng Chen, Zhiteng Li, Yong Guo, Wenbo Li, Linghe Kong, Yulun Zhang
One is to compress the DM into 1-bit, aka binarization, alleviating the storage and computation pressure.
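As a rough illustration of what 1-bit weight binarization means (an XNOR-Net-style sketch with a per-tensor scaling factor; not the paper's binarized diffusion method):

```python
# Hedged sketch of 1-bit weight binarization: sign codes plus a scaling factor.
import torch

def binarize_weights(w: torch.Tensor):
    """Binarize a weight tensor to {-1, +1} with a per-tensor scaling factor."""
    alpha = w.abs().mean()   # scaling factor minimizing L2 error for sign codes
    return alpha * torch.sign(w)

w = torch.randn(64, 64, 3, 3) * 0.05
w_bin = binarize_weights(w)
print("relative error:", ((w - w_bin).norm() / w.norm()).item())
```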
no code implementations • 30 Jan 2025 • Chunming He, Rihan Zhang, Fengyang Xiao, Chenyu Fang, Longxiang Tang, Yulun Zhang, Linghe Kong, Deng-Ping Fan, Kai Li, Sina Farsiu
To address this, we propose the Reversible Unfolding Network (RUN), which applies reversible strategies across both mask and RGB domains through a theoretically grounded framework, enabling accurate segmentation.
1 code implementation • 2 Jan 2025 • Mengjie Qin, Yuchao Feng, Zongliang Wu, Yulun Zhang, Xin Yuan
To address this issue, we propose a Mamba-inspired Joint Unfolding Network (MiJUN), which integrates physics-embedded DUNs with learning-based HSI imaging.
no code implementations • 30 Dec 2024 • Han Zhou, Wei Dong, Xiaohong Liu, Yulun Zhang, Guangtao Zhai, Jun Chen
Although significant progress has been made in enhancing visibility, retrieving texture details, and mitigating noise in Low-Light (LL) images, the challenge persists in applying current Low-Light Image Enhancement (LLIE) methods to real-world scenarios, primarily due to the diverse illumination conditions encountered.
1 code implementation • 8 Dec 2024 • Xingyu Zheng, Xianglong Liu, Yichen Bian, Xudong Ma, Yulun Zhang, Jiakai Wang, Jinyang Guo, Haotong Qin
Diffusion models (DMs) have been significantly developed and widely used in various applications due to their excellent generative qualities.
no code implementations • 27 Nov 2024 • Eduard Zamfir, Zongwei Wu, Nancy Mehta, Yuedong Tan, Danda Pani Paudel, Yulun Zhang, Radu Timofte
To address this, we introduce "complexity experts" -- flexible expert blocks with varying computational complexity and receptive fields.
Ranked #2 on Blind All-in-One Image Restoration on 5-Degradations
1 code implementation • 26 Nov 2024 • Jingkai Wang, Jue Gong, Lin Zhang, Zheng Chen, Xing Liu, Hong Gu, Yutong Liu, Yulun Zhang, Xiaokang Yang
Moreover, existing methods often struggle to generate face images that are harmonious, realistic, and consistent with the subject's identity.
1 code implementation • 26 Nov 2024 • Zheng Chen, Xun Zhang, Wenbo Li, Renjing Pei, Fenglong Song, Xiongkuo Min, Xiaohong Liu, Xin Yuan, Yong Guo, Yulun Zhang
Experiments demonstrate that our proposed task paradigm, dataset, and benchmark facilitate more fine-grained IQA applications.
1 code implementation • 26 Nov 2024 • Libo Zhu, Jianze Li, Haotong Qin, Wenbo Li, Yulun Zhang, Yong Guo, Xiaokang Yang
Diffusion-based image super-resolution (SR) models have shown superior performance at the cost of multiple denoising steps.
1 code implementation • 22 Nov 2024 • Hang Guo, Yong Guo, Yaohua Zha, Yulun Zhang, Wenbo Li, Tao Dai, Shu-Tao Xia, Yawei Li
The Mamba-based image restoration backbones have recently demonstrated significant potential in balancing global reception and computational efficiency.
no code implementations • 21 Nov 2024 • Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, Yuqian Zhou, Yulun Zhang, Xiaokang Yang, Zhe Lin, Alan Yuille
Existing feedforward image-to-3D methods mainly rely on 2D multi-view diffusion models that cannot guarantee 3D consistency.
1 code implementation • 15 Nov 2024 • Rui Yin, Haotong Qin, Yulun Zhang, Wenbo Li, Yong Guo, Jianjun Zhu, Cheng Wang, Biao Jia
BiDense incorporates two key techniques: the Distribution-adaptive Binarizer (DAB) and the Channel-adaptive Full-precision Bypass (CFB).
1 code implementation • 28 Oct 2024 • Wei Dong, Han Zhou, Yulun Zhang, Xiaohong Liu, Jun Chen
Inspired by Mamba, which demonstrates powerful and highly efficient sequence modeling, we introduce a novel Mamba-based framework for Exposure Correction (ECMamba) with dual pathways, dedicated to restoring the reflectance and the illumination map, respectively.
no code implementations • 14 Oct 2024 • Junbo Qiao, Jincheng Liao, Wei Li, Yulun Zhang, Yong Guo, Yi Wen, Zhangxizi Qiu, Jiao Xie, Jie Hu, Shaohui Lin
State Space Models (SSM), such as Mamba, have shown strong representation ability in modeling long-range dependency with linear complexity, achieving successful applications from high-level to low-level vision tasks.
1 code implementation • 5 Oct 2024 • Jianze Li, JieZhang Cao, Zichen Zou, Xiongfei Su, Xin Yuan, Yulun Zhang, Yong Guo, Xiaokang Yang
However, these methods incur substantial training costs and may constrain the performance of the student model by the teacher's limitations.
no code implementations • 5 Oct 2024 • Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen
To address this, we propose a Gap Preserving Distillation (GPD) method that trains an additional dynamic teacher model from scratch along with training the student to bridge this gap.
no code implementations • 5 Oct 2024 • Keda Tao, Jinjin Gu, Yulun Zhang, Xiucheng Wang, Nan Cheng
We introduce a novel Multi-modal Guided Real-World Face Restoration (MGFR) technique designed to improve the quality of facial image restoration from low-quality inputs.
1 code implementation • 4 Oct 2024 • Zhiteng Li, Xianglong Yan, Tianao Zhang, Haotong Qin, Dong Xie, Jiang Tian, Zhongchao shi, Linghe Kong, Yulun Zhang, Xiaokang Yang
However, current binarization methods struggle to narrow the distribution gap between binarized and full-precision weights, while also overlooking the column deviation in LLM weight distribution.
1 code implementation • 3 Oct 2024 • Kai Liu, Ziqing Zhang, Wenbo Li, Renjing Pei, Fenglong Song, Xiaohong Liu, Linghe Kong, Yulun Zhang
Image quality assessment (IQA) serves as the gold standard for evaluating model performance in nearly all computer vision fields.
1 code implementation • 16 Jul 2024 • Ping Wang, Yulun Zhang, Lishun Wang, Xin Yuan
Transformers have achieved state-of-the-art performance in solving the inverse problem of Snapshot Compressive Imaging (SCI) for video, whose ill-posedness is rooted in the mixed degradation of spatial masking and temporal aliasing.
1 code implementation • 11 Jul 2024 • Rui Yin, Yulun Zhang, Zherong Pan, Jianjun Zhu, Cheng Wang, Biao Jia
Two-view pose estimation is essential for map-free visual relocalization and object pose tracking tasks.
no code implementations • 8 Jul 2024 • Xiang Zhang, Yulun Zhang, Fisher Yu
Transformers have exhibited promising performance in computer vision tasks including image super-resolution (SR).
Ranked #36 on Image Super-Resolution on Set14 - 4x upscaling
1 code implementation • 17 Jun 2024 • Chunming He, Yuqi Shen, Chengyu Fang, Fengyang Xiao, Longxiang Tang, Yulun Zhang, WangMeng Zuo, Zhenhua Guo, Xiu Li
Deep generative models have garnered significant attention in low-level vision tasks due to their generative capabilities.
1 code implementation • 12 Jun 2024 • Chengyu Fang, Chunming He, Fengyang Xiao, Yulun Zhang, Longxiang Tang, Yuelin Zhang, Kai Li, Xiu Li
Real-world Image Dehazing (RID) aims to alleviate haze-induced degradation in real-world settings.
1 code implementation • 10 Jun 2024 • Kai Liu, Haotong Qin, Yong Guo, Xin Yuan, Linghe Kong, Guihai Chen, Yulun Zhang
Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment, which allows advanced SR models to enjoy compact low-bit parameters and efficient integer/bitwise constructions for storage compression and inference acceleration, respectively.
1 code implementation • 9 Jun 2024 • Zheng Chen, Haotong Qin, Yong Guo, Xiongfei Su, Xin Yuan, Linghe Kong, Yulun Zhang
Nonetheless, due to the model structure and the multi-step iterative attribute of DMs, existing binarization methods result in significant performance degradation.
1 code implementation • 1 Jun 2024 • Jiahua Dong, Hui Yin, Hongliu Li, Wenbo Li, Yulun Zhang, Salman Khan, Fahad Shahbaz Khan
Experiments verify the benefits of our DHM for HSI reconstruction.
2 code implementations • 24 May 2024 • Yuanhao Cai, Zihao Xiao, Yixun Liang, Minghan Qin, Yulun Zhang, Xiaokang Yang, Yaoyao Liu, Alan Yuille
In this paper, we propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS), which can efficiently render novel HDR views and reconstruct LDR images with a user input exposure time.
Ranked #1 on Novel View Synthesis on HDR-GS
no code implementations • 24 May 2024 • Eduard Zamfir, Zongwei Wu, Nancy Mehta, Danda Pani Paudel, Yulun Zhang, Radu Timofte
Reconstructing missing details from degraded low-quality inputs poses a significant challenge.
Ranked #4 on 5-Degradation Blind All-in-One Image Restoration
1 code implementation • 24 Apr 2024 • He Jiang, Yulun Zhang, Rishi Veerapaneni, Jiaoyang Li
We present future directions such as developing more competitive rule-based and anytime MAPF algorithms and parallelizing state-of-the-art MAPF algorithms.
3 code implementations • 22 Apr 2024 • Xiaoning Liu, Zongwei Wu, Ao Li, Florin-Alexandru Vasluianu, Yulun Zhang, Shuhang Gu, Le Zhang, Ce Zhu, Radu Timofte, Zhi Jin, Hongjun Wu, Chenxi Wang, Haitao Ling, Yuanhao Cai, Hao Bian, Yuxin Zheng, Jing Lin, Alan Yuille, Ben Shao, Jin Guo, Tianli Liu, Mohao Wu, Yixu Feng, Shuo Hou, Haotian Lin, Yu Zhu, Peng Wu, Wei Dong, Jinqiu Sun, Yanning Zhang, Qingsen Yan, Wenbin Zou, Weipeng Yang, Yunxiang Li, Qiaomu Wei, Tian Ye, Sixiang Chen, Zhao Zhang, Suiyi Zhao, Bo wang, Yan Luo, Zhichao Zuo, Mingshen Wang, Junhu Wang, Yanyan Wei, Xiaopeng Sun, Yu Gao, Jiancheng Huang, Hongming Chen, Xiang Chen, Hui Tang, Yuanbin Chen, Yuanbo Zhou, Xinwei Dai, Xintao Qiu, Wei Deng, Qinquan Gao, Tong Tong, Mingjia Li, Jin Hu, Xinyu He, Xiaojie Guo, sabarinathan, K Uma, A Sasithradevi, B Sathya Bama, S. Mohamed Mansoor Roomi, V. Srivatsav, Jinjuan Wang, Long Sun, Qiuying Chen, Jiahong Shao, Yizhi Zhang, Marcos V. Conde, Daniel Feijoo, Juan C. Benito, Alvaro García, Jaeho Lee, Seongwan Kim, Sharif S M A, Nodirkhuja Khujaev, Roman Tsoy, Ali Murtaza, Uswah Khairuddin, Ahmad 'Athif Mohd Faudzi, Sampada Malagi, Amogh Joshi, Nikhil Akalwadi, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudenagudi, Wenyi Lian, Wenjing Lian, Jagadeesh Kalyanshetti, Vijayalaxmi Ashok Aralikatti, Palani Yashaswini, Nitish Upasi, Dikshit Hegde, Ujwala Patil, Sujata C, Xingzhuo Yan, Wei Hao, Minghan Fu, Pooja Choksy, Anjali Sarvaiya, Kishor Upla, Kiran Raja, Hailong Yan, Yunkai Zhang, Baiang Li, Jingyi Zhang, Huan Zheng
This paper reviews the NTIRE 2024 low light image enhancement challenge, highlighting the proposed solutions and results.
1 code implementation • 15 Apr 2024 • Zheng Chen, Zongwei Wu, Eduard Zamfir, Kai Zhang, Yulun Zhang, Radu Timofte, Xiaokang Yang, Hongyuan Yu, Cheng Wan, Yuxin Hong, Zhijuan Huang, Yajun Zou, Yuan Huang, Jiamin Lin, Bingnan Han, Xianyu Guan, Yongsheng Yu, Daoan Zhang, Xuanwu Yin, Kunlong Zuo, Jinhua Hao, Kai Zhao, Kun Yuan, Ming Sun, Chao Zhou, Hongyu An, Xinfeng Zhang, Zhiyuan Song, Ziyue Dong, Qing Zhao, Xiaogang Xu, Pengxu Wei, Zhi-chao Dou, Gui-ling Wang, Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou, Cansu Korkmaz, A. Murat Tekalp, Yubin Wei, Xiaole Yan, Binren Li, Haonan Chen, Siqi Zhang, Sihan Chen, Amogh Joshi, Nikhil Akalwadi, Sampada Malagi, Palani Yashaswini, Chaitra Desai, Ramesh Ashok Tabib, Ujwala Patil, Uma Mudenagudi, Anjali Sarvaiya, Pooja Choksy, Jagrit Joshi, Shubh Kawa, Kishor Upla, Sushrut Patwardhan, Raghavendra Ramachandra, Sadat Hossain, Geongi Park, S. M. Nadim Uddin, Hao Xu, Yanhui Guo, Aman Urumbekov, Xingzhuo Yan, Wei Hao, Minghan Fu, Isaac Orais, Samuel Smith, Ying Liu, Wangwang Jia, Qisheng Xu, Kele Xu, Weijun Yuan, Zhan Li, Wenqin Kuang, Ruijin Guan, Ruting Deng, Zhao Zhang, Bo wang, Suiyi Zhao, Yan Luo, Yanyan Wei, Asif Hussain Khan, Christian Micheloni, Niki Martinel
This paper reviews the NTIRE 2024 challenge on image super-resolution ($\times$4), highlighting the solutions proposed and the outcomes obtained.
1 code implementation • CVPR 2024 • Gengchen Zhang, Yulun Zhang, Xin Yuan, Ying Fu
For the second issue, we present a distribution-aware binary convolution, which captures the distribution characteristics of real-valued input and incorporates them into plain binary convolutions to alleviate the degradation in performance.
no code implementations • 15 Mar 2024 • Xiaoning Liu, Ao Li, Zongwei Wu, Yapeng Du, Le Zhang, Yulun Zhang, Radu Timofte, Ce Zhu
Leveraging Transformer attention has led to great advancements in HDR deghosting.
1 code implementation • CVPR 2024 • Yijun Yang, Hongtao Wu, Angelica I. Aviles-Rivero, Yulun Zhang, Jing Qin, Lei Zhu
Although ViWS-Net is proposed to remove adverse weather conditions in videos with a single set of pre-trained weights, it is seriously blinded by seen weather at train-time and degenerates when coming to unseen weather during test-time.
2 code implementations • 7 Mar 2024 • Yuanhao Cai, Yixun Liang, Jiahao Wang, Angtian Wang, Yulun Zhang, Xiaokang Yang, Zongwei Zhou, Alan Yuille
X-ray is widely applied for transmission imaging due to its stronger penetration than natural light.
2 code implementations • 5 Feb 2024 • Eduard Zamfir, Zongwei Wu, Nancy Mehta, Yulun Zhang, Radu Timofte
Subsequently, the model delves into the subtleties of rank choice by leveraging a mixture of low-rank experts.
4 code implementations • 3 Feb 2024 • Zixiang Zhao, Lilun Deng, Haowen Bai, Yukun Cui, Zhipeng Zhang, Yulun Zhang, Haotong Qin, Dongdong Chen, Jiangshe Zhang, Peng Wang, Luc van Gool
Therefore, we introduce a novel fusion paradigm named image Fusion via vIsion-Language Model (FILM), for the first time, utilizing explicit textual information from source images to guide the fusion process.
2 code implementations • 2 Feb 2024 • Yulun Zhang, He Jiang, Varun Bhatt, Stefanos Nikolaidis, Jiaoyang Li
In this work, we introduce the guidance graph as a versatile representation of guidance for lifelong MAPF, framing Guidance Graph Optimization as the task of optimizing its edge weights.
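A toy illustration (an assumption using networkx, not the paper's optimization method) of how a guidance graph's edge weights bias the routes a path planner prefers:

```python
# Toy guidance graph: a directed grid whose edge weights steer planners.
import networkx as nx

G = nx.grid_2d_graph(4, 4).to_directed()
nx.set_edge_attributes(G, 1.0, "weight")
# Penalize one corridor so planners route agents around it.
for e in [((1, 1), (1, 2)), ((1, 2), (1, 1))]:
    G.edges[e]["weight"] = 5.0

path = nx.shortest_path(G, (0, 0), (3, 3), weight="weight")
print(path)  # the cheapest route now avoids the penalized edges
```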
1 code implementation • 30 Jan 2024 • Paul Friedrich, Yulun Zhang, Michael Curry, Ludwig Dierks, Stephen Mcaleer, Jiaoyang Li, Tuomas Sandholm, Sven Seuken
Multi-Agent Path Finding (MAPF) involves determining paths for multiple agents to travel simultaneously and collision-free through a shared area toward given goal locations.
1 code implementation • 9 Dec 2023 • Junzhe Lu, Jing Lin, Hongkun Dou, Ailing Zeng, Yue Deng, Yulun Zhang, Haoqian Wang
Our approach demonstrates considerable enhancements over common uniform scheduling used in image domains, boasting improvements of 5.4%, 17.2%, and 3.8% across human mesh recovery, pose completion, and motion denoising, respectively.
1 code implementation • CVPR 2024 • Haofei Xu, Anpei Chen, Yuedong Chen, Christos Sakaridis, Yulun Zhang, Marc Pollefeys, Andreas Geiger, Fisher Yu
We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward approach to solving sparse view synthesis under multiple different baseline settings (small and large baselines, and different number of input views).
1 code implementation • 24 Nov 2023 • Zhiteng Li, Yulun Zhang, Jing Lin, Haotong Qin, Jinjin Gu, Xin Yuan, Linghe Kong, Xiaokang Yang
In this work, we propose BinaryHPE, a novel binarization method designed to estimate the 3D human body, face, and hands parameters efficiently.
1 code implementation • 24 Nov 2023 • Zheng Chen, Yulun Zhang, Jinjin Gu, Xin Yuan, Linghe Kong, Guihai Chen, Xiaokang Yang
Specifically, we first design a text-image generation pipeline to integrate text into the SR dataset through the text degradation representation and degradation model.
1 code implementation • 20 Nov 2023 • Chunming He, Chengyu Fang, Yulun Zhang, Tian Ye, Kai Li, Longxiang Tang, Zhenhua Guo, Xiu Li, Sina Farsiu
These priors are subsequently utilized by RGformer to guide the decomposition of image features into their respective reflectance and illumination components.
1 code implementation • CVPR 2024 • JieZhang Cao, Yue Shi, Kai Zhang, Yulun Zhang, Radu Timofte, Luc van Gool
Due to the inherent property of diffusion models, most existing methods need long serial sampling chains to restore HQ images step-by-step, resulting in expensive sampling time and high computation costs.
1 code implementation • NeurIPS 2023 • Yulun Zhang, Matthew C. Fontaine, Varun Bhatt, Stefanos Nikolaidis, Jiaoyang Li
We show that NCA environment generators maintain consistent, regularized patterns regardless of environment size, significantly enhancing the scalability of multi-robot systems in two different domains with up to 2,350 robots.
no code implementations • 5 Sep 2023 • Wei Huang, Haotong Qin, Yangdong Liu, Jingzhuo Liang, Yulun Zhang, Ying Li, Xianglong Liu
This leads to a non-negligible gap between the estimated efficiency metrics and the actual hardware that makes quantized models far away from the optimal accuracy and efficiency, and also causes the quantization process to rely on additional high-performance devices.
1 code implementation • 31 Aug 2023 • Shuang Xu, Yifan Wang, Zixiang Zhao, Jiangjun Peng, Xiangyong Cao, Deyu Meng, Yulun Zhang, Radu Timofte, Luc van Gool
NGR is applicable to various image types and different image processing tasks, functioning in a zero-shot learning fashion, making it a versatile and plug-and-play regularizer.
no code implementations • 26 Aug 2023 • Bin Xia, Yulun Zhang, Shiyin Wang, Yitong Wang, Xinglong Wu, Yapeng Tian, Wenming Yang, Radu Timotfe, Luc van Gool
Compared to traditional DMs, the compact IPR enables DiffI2I to obtain more accurate outcomes and employ a lighter denoising network and fewer iterations.
1 code implementation • ICCV 2023 • Miaoyu Li, Ying Fu, Ji Liu, Yulun Zhang
3) stage interaction ignoring the differences in features at different stages.
1 code implementation • 14 Aug 2023 • Hao Shen, Zhong-Qiu Zhao, Yulun Zhang, Zhao Zhang
Multi-stage architectures have exhibited efficacy in image dehazing, which usually decomposes a challenging task into multiple more tractable sub-tasks and progressively estimates latent hazy-free images.
no code implementations • 7 Aug 2023 • Zichun Wang, Yulun Zhang, Debing Zhang, Ying Fu
However, under their blind spot constraints, previous self-supervised video denoising methods suffer from significant information loss and texture destruction in either the whole reference frame or neighbor frames, due to their inadequate consideration of the receptive field.
1 code implementation • ICCV 2023 • Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xiaokang Yang, Fisher Yu
Based on the above idea, we propose a novel Transformer model, Dual Aggregation Transformer (DAT), for image SR. Our DAT aggregates features across spatial and channel dimensions, in the inter-block and intra-block dual manner.
Ranked #10 on Image Super-Resolution on Manga109 - 4x upscaling
1 code implementation • 6 Aug 2023 • Chunming He, Kai Li, Yachao Zhang, Yulun Zhang, Zhenhua Guo, Xiu Li, Martin Danelljan, Fisher Yu
On the prey side, we propose an adversarial training framework, Camouflageator, which introduces an auxiliary generator to generate more camouflaged objects that are harder for a COD method to detect.
no code implementations • 3 Aug 2023 • Longxiang Tang, Kai Li, Chunming He, Yulun Zhang, Xiu Li
In this paper, we propose a consistency regularization framework to develop a more generalizable SFDA method, which simultaneously boosts model performance on both target training and testing datasets.
no code implementations • 19 Jul 2023 • Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao liu, Yu Lai, Shanshan Chen, Wenqi Wang, HaoNing Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions.
no code implementations • 15 Jul 2023 • Chunming He, Kai Li, Guoxia Xu, Jiangpeng Yan, Longxiang Tang, Yulun Zhang, Xiu Li, YaoWei Wang
Specifically, we extract features from an HQ image and explicitly insert the features, which are expected to encode HQ cues, into the enhancement network to guide the LQ enhancement with the variational normalization module.
1 code implementation • 14 Jul 2023 • Longxiang Tang, Kai Li, Chunming He, Yulun Zhang, Xiu Li
This paper aims to address these two issues by proposing the Class-Balanced Mean Teacher (CBMT) model.
no code implementations • 1 Jun 2023 • Jiamian Wang, Zongliang Wu, Yulun Zhang, Xin Yuan, Tao Lin, Zhiqiang Tao
In this work, we tackle this challenge by marrying prompt tuning with FL to snapshot compressive imaging for the first time and propose a federated hardware-prompt learning (FedHP) method.
no code implementations • 29 May 2023 • Ruofan Zhang, Jinjin Gu, Haoyu Chen, Chao Dong, Yulun Zhang, Wenming Yang
In this work, we introduce a novel approach to craft training degradation distributions using a small set of reference images.
no code implementations • ICCV 2023 • Steven Tel, Zongwei Wu, Yulun Zhang, Barthélémy Heyrman, Cédric Demonceaux, Radu Timofte, Dominique Ginhac
The spatial attention aims to deal with the intra-image correlation to model the dynamic motion, while the channel attention enables the inter-image intertwining to enhance the semantic consistency across frames.
no code implementations • 24 May 2023 • Chang Liu, Henghui Ding, Yulun Zhang, Xudong Jiang
However, the generic attention mechanism in Transformer only uses the language input for attention weight calculation, which does not explicitly fuse language features in its output.
1 code implementation • NeurIPS 2023 • Zheng Chen, Yulun Zhang, Ding Liu, Bin Xia, Jinjin Gu, Linghe Kong, Xin Yuan
Specifically, we perform the DM in a highly compacted latent space to generate the prior feature for the deblurring process.
4 code implementations • CVPR 2024 • Zixiang Zhao, Haowen Bai, Jiangshe Zhang, Yulun Zhang, Kai Zhang, Shuang Xu, Dongdong Chen, Radu Timofte, Luc van Gool
These components enable the net training to follow the principles of the natural sensing-imaging process while satisfying the equivariant imaging prior.
no code implementations • NeurIPS 2023 • Chunming He, Kai Li, Yachao Zhang, Guoxia Xu, Longxiang Tang, Yulun Zhang, Zhenhua Guo, Xiu Li
It remains a challenging task since (1) it is hard to distinguish concealed objects from the background due to the intrinsic similarity and (2) the sparsely-annotated training data only provide weak supervision for model learning.
2 code implementations • NeurIPS 2023 • Yuanhao Cai, Yuxin Zheng, Jing Lin, Xin Yuan, Yulun Zhang, Haoqian Wang
Finally, our BiSRNet is derived by using the proposed techniques to binarize the base model.
1 code implementation • 10 May 2023 • Yulun Zhang, Matthew C. Fontaine, Varun Bhatt, Stefanos Nikolaidis, Jiaoyang Li
We show that, even with state-of-the-art MAPF algorithms, commonly used human-designed layouts can lead to congestion for warehouses with large numbers of robots and thus have limited scalability.
1 code implementation • CVPR 2023 • Miaoyu Li, Ji Liu, Ying Fu, Yulun Zhang, Dejing Dou
In this paper, we address these issues by proposing a spectral enhanced rectangle Transformer, driving it to explore the non-local spatial similarity and global spectral low-rank property of HSIs.
1 code implementation • CVPR 2023 • Zichun Wang, Ying Fu, Ji Liu, Yulun Zhang
Despite the significant results on synthetic noise under simplified assumptions, most self-supervised denoising methods fail under real noise due to the strong spatial noise correlation, including the advanced self-supervised blind-spot networks (BSNs).
1 code implementation • ICCV 2023 • Bin Xia, Yulun Zhang, Shiyin Wang, Yitong Wang, Xinglong Wu, Yapeng Tian, Wenming Yang, Luc van Gool
Diffusion model (DM) has achieved SOTA performance by modeling the image synthesis process into a sequential application of a denoising network.
2 code implementations • ICCV 2023 • Jiamian Wang, Huan Wang, Yulun Zhang, Yun Fu, Zhiqiang Tao
Second, existing pruning methods generally operate upon a pre-trained network to determine the sparse structure, making it hard to avoid dense model training in the traditional SR paradigm.
no code implementations • ICCV 2023 • Zixiang Zhao, Jiangshe Zhang, Xiang Gu, Chengli Tan, Shuang Xu, Yulun Zhang, Radu Timofte, Luc van Gool
Then, the extracted features are mapped to the spherical space to complete the separation of private features and the alignment of shared features.
4 code implementations • ICCV 2023 • Zixiang Zhao, Haowen Bai, Yuanzhi Zhu, Jiangshe Zhang, Shuang Xu, Yulun Zhang, Kai Zhang, Deyu Meng, Radu Timofte, Luc van Gool
To leverage strong generative priors and address challenges such as unstable training and lack of interpretability for GAN-based generative methods, we propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
5 code implementations • ICCV 2023 • Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, Yulun Zhang
When enhancing low-light images, many deep learning algorithms are based on the Retinex theory.
Ranked #1 on Low-Light Image Enhancement on SMID
1 code implementation • 11 Mar 2023 • Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xiaokang Yang
In this work, we propose the Recursive Generalization Transformer (RGT) for image SR, which can capture global spatial information and is suitable for high-resolution images.
Ranked #9 on Image Super-Resolution on Manga109 - 4x upscaling
1 code implementation • 11 Mar 2023 • Jiale Zhang, Yulun Zhang, Jinjin Gu, Jiahua Dong, Linghe Kong, Xiaokang Yang
The channel-wise Transformer block performs direct global context interactions across tokens defined by channel dimension.
1 code implementation • 1 Mar 2023 • Bryon Tjanaka, Matthew C. Fontaine, David H. Lee, Yulun Zhang, Nivedit Reddy Balam, Nathaniel Dennler, Sujay S. Garlanka, Nikitas Dimitri Klapsis, Stefanos Nikolaidis
Recent years have seen a rise in the popularity of quality diversity (QD) optimization, a branch of optimization that seeks to find a collection of diverse, high-performing solutions to a given problem.
2 code implementations • 2 Feb 2023 • Jiahua Dong, Hongliu Li, Yang Cong, Gan Sun, Yulun Zhang, Luc van Gool
These issues render global model to undergo catastrophic forgetting on old categories, when local clients receive new categories consecutively under limited memory of storing old categories.
no code implementations • CVPR 2023 • Chunming He, Kai Li, Yachao Zhang, Longxiang Tang, Yulun Zhang, Zhenhua Guo, Xiu Li
COD is a challenging task due to the intrinsic similarity of camouflaged objects with the background, as well as their ambiguous boundaries.
no code implementations • ICCV 2023 • Chunming He, Kai Li, Guoxia Xu, Yulun Zhang, Runze Hu, Zhenhua Guo, Xiu Li
Heterogeneous image fusion (HIF) techniques aim to enhance image quality by merging complementary information from images captured by different sensors.
1 code implementation • CVPR 2023 • JieZhang Cao, Qin Wang, Yongqin Xian, Yawei Li, Bingbing Ni, Zhiming Pi, Kai Zhang, Yulun Zhang, Radu Timofte, Luc van Gool
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
1 code implementation • 30 Nov 2022 • Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, Luc van Gool
It consists of a knowledge distillation based implicit degradation estimator network (KD-IDE) and an efficient SR network.
3 code implementations • CVPR 2023 • Zixiang Zhao, Haowen Bai, Jiangshe Zhang, Yulun Zhang, Shuang Xu, Zudi Lin, Radu Timofte, Luc van Gool
We then introduce a dual-branch Transformer-CNN feature extractor with Lite Transformer (LT) blocks leveraging long-range attention to handle low-frequency global features and Invertible Neural Networks (INN) blocks focusing on extracting high-frequency local information.
3 code implementations • 25 Nov 2022 • Miaoyu Li, Ying Fu, Yulun Zhang
Hyperspectral image (HSI) denoising is a crucial preprocessing procedure for the subsequent HSI applications.
3 code implementations • 24 Nov 2022 • Zheng Chen, Yulun Zhang, Jinjin Gu, Yongbing Zhang, Linghe Kong, Xin Yuan
The core of our CAT is the Rectangle-Window Self-Attention (Rwin-SA), which utilizes horizontal and vertical rectangle window attention in different heads parallelly to expand the attention area and aggregate the features cross different windows.
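As a rough sketch of the rectangle-window idea (the window sizes and the partition helper below are illustrative assumptions, not the released CAT code), horizontal and vertical rectangle windows simply partition the feature map differently before attention is computed inside each window:

```python
# Partition a feature map into non-overlapping rectangle windows, e.g.
# horizontal (4x16) and vertical (16x4), before window self-attention.
import torch

def window_partition(x, win_h, win_w):
    """x: (B, H, W, C) -> (num_windows*B, win_h*win_w, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // win_h, win_h, W // win_w, win_w, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win_h * win_w, C)

feat = torch.randn(1, 64, 64, 180)            # B, H, W, C
horizontal = window_partition(feat, 4, 16)    # wide windows
vertical = window_partition(feat, 16, 4)      # tall windows
print(horizontal.shape, vertical.shape)       # token groups ready for attention
```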
no code implementations • 9 Oct 2022 • Jinjin Gu, Haoming Cai, Chenyu Dong, Ruofan Zhang, Yulun Zhang, Wenming Yang, Chun Yuan
We finally use a guided fusion operation to integrate the sharp edges generated by the network and flat areas by the interpolation method to get the final SR image.
1 code implementation • 4 Oct 2022 • Jiale Zhang, Yulun Zhang, Jinjin Gu, Yongbing Zhang, Linghe Kong, Xin Yuan
This is considered as a dense attention strategy since the interactions of tokens are restrained in dense regions.
2 code implementations • 2 Oct 2022 • Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, Luc van Gool
In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for IR tasks.
1 code implementation • 24 Sep 2022 • Jiamian Wang, Kunpeng Li, Yulun Zhang, Xin Yuan, Zhiqiang Tao
Besides, CASSI entangles the spatial and spectral information into a 2D measurement, placing a barrier for information disentanglement and modeling.
no code implementations • 25 Aug 2022 • JieZhang Cao, Qin Wang, Jingyun Liang, Yulun Zhang, Kai Zhang, Radu Timofte, Luc van Gool
To this end, we propose a new multi-scale refined optical flow-guided video denoising method, which is more robust to different noise levels.
Ranked #1 on Video Denoising on VideoLQ
1 code implementation • 28 Jul 2022 • Bin Xia, Yapeng Tian, Yulun Zhang, Yucheng Hang, Wenming Yang, Qingmin Liao
Most CNN-based super-resolution (SR) methods assume that the degradation is known (e.g., bicubic).
1 code implementation • 25 Jul 2022 • JieZhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, Luc van Gool
Reference-based image super-resolution (RefSR) aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
1 code implementation • 21 Jul 2022 • JieZhang Cao, Jingyun Liang, Kai Zhang, Wenguan Wang, Qin Wang, Yulun Zhang, Hao Tang, Luc van Gool
These issues can be alleviated by a cascade of three separate sub-tasks, including video deblurring, frame interpolation, and super-resolution, which, however, would fail to capture the spatial and temporal correlations among video sequences.
Ranked #6 on Video Super-Resolution on REDS4 - 4x upscaling
1 code implementation • CVPR 2023 • Bin Xia, Jingwen He, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Luc van Gool
In SSL, we design pruning schemes for several key components in VSR models, including residual blocks, recurrent networks, and upsampling networks.
1 code implementation • 20 May 2022 • Jing Lin, Xiaowan Hu, Yuanhao Cai, Haoqian Wang, Youliang Yan, Xueyi Zou, Yulun Zhang, Luc van Gool
On the other hand, we equip the sequence-to-sequence model with an unsupervised optical flow estimator to maximize its potential.
Ranked #2 on Video Enhancement on MFQE v2
1 code implementation • 20 May 2022 • Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, Luc van Gool
In coded aperture snapshot spectral compressive imaging (CASSI) systems, hyperspectral image (HSI) reconstruction methods are employed to recover the spatial-spectral signal from a compressed measurement.
Ranked #1 on Spectral Reconstruction on Real HSI
3 code implementations • 17 Apr 2022 • Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, Luc van Gool
Existing leading methods for spectral reconstruction (SR) focus on designing deeper or wider convolutional neural networks (CNNs) to learn the end-to-end mapping from the RGB image to its hyperspectral image (HSI).
Ranked #1 on Spectral Reconstruction on ARAD-1K
2 code implementations • NeurIPS 2021 • Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Donglai Wei
Additionally, for better noise fitting, we present an efficient architecture Simple Multi-scale Network (SMNet) as the generator.
Ranked #1 on Noise Estimation on SIDD
no code implementations • 29 Mar 2022 • Chaowei Fang, Dingwen Zhang, Liang Wang, Yulun Zhang, Lechao Cheng, Junwei Han
Improving the resolution of magnetic resonance (MR) image data is critical to computer-aided diagnosis and brain function analysis.
2 code implementations • 24 Mar 2022 • Kai Zhang, Yawei Li, Jingyun Liang, JieZhang Cao, Yulun Zhang, Hao Tang, Deng-Ping Fan, Radu Timofte, Luc van Gool
While recent years have witnessed a dramatic upsurge of exploiting deep neural networks toward solving image denoising, existing methods mostly rely on simple noise assumptions, such as additive white Gaussian noise (AWGN), JPEG compression noise and camera sensor noise, and a general-purpose blind denoising method for real images remains unsolved.
Ranked #1 on Image Denoising on Urban100 sigma50
1 code implementation • 16 Mar 2022 • Bin Sun, Yulun Zhang, Songyao Jiang, Yun Fu
In this paper, we propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
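A minimal sketch of a pixel-unshuffle downsampling step (the channel-fusing 1x1 convolution and the factor are illustrative assumptions, not the paper's exact module): space-to-depth rearrangement halves the spatial resolution without discarding pixels, then a cheap convolution fuses the expanded channels.

```python
# Pixel-unshuffle downsampling: space-to-depth followed by 1x1 channel fusion.
import torch
import torch.nn as nn

class PixelUnshuffleDown(nn.Module):
    def __init__(self, channels, factor=2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(factor)            # (C, H, W) -> (C*f^2, H/f, W/f)
        self.fuse = nn.Conv2d(channels * factor ** 2, channels, 1)

    def forward(self, x):
        return self.fuse(self.unshuffle(x))

x = torch.randn(1, 64, 48, 48)
print(PixelUnshuffleDown(64)(x).shape)   # torch.Size([1, 64, 24, 24])
```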
1 code implementation • 9 Mar 2022 • Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, Luc van Gool
Many algorithms have been developed to solve the inverse problem of coded aperture snapshot spectral imaging (CASSI), i.e., recovering the 3D hyperspectral images (HSIs) from a 2D compressive measurement.
Ranked #4 on Spectral Reconstruction on Real HSI
2 code implementations • CVPR 2022 • Xiaowan Hu, Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, Luc van Gool
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
Ranked #7 on Spectral Reconstruction on Real HSI
6 code implementations • 27 Jan 2022 • Zudi Lin, Prateek Garg, Atmadeep Banerjee, Salma Abdel Magid, Deqing Sun, Yulun Zhang, Luc van Gool, Donglai Wei, Hanspeter Pfister
Image super-resolution (SR) is a fast-moving field with novel architectures attracting the spotlight.
1 code implementation • 6 Jan 2022 • Jing Lin, Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Youliang Yan, Xueyi Zou, Henghui Ding, Yulun Zhang, Radu Timofte, Luc van Gool
Exploiting similar and sharper scene patches in spatio-temporal neighborhoods is critical for video deblurring.
Ranked #1 on Deblurring on DVD
no code implementations • CVPR 2022 • Salma Abdel Magid, Zudi Lin, Donglai Wei, Yulun Zhang, Jinjin Gu, Hanspeter Pfister
Our key contribution is to leverage a texture classifier, which enables us to assign patches with semantic labels, to identify the source of SR errors both globally and locally.
1 code implementation • 31 Dec 2021 • Jiamian Wang, Yulun Zhang, Xin Yuan, Ziyi Meng, Zhiqiang Tao
Recently, hyperspectral imaging (HSI) has attracted increasing research attention, especially for the ones based on a coded aperture snapshot spectral imaging (CASSI) system.
1 code implementation • 7 Dec 2021 • Yulun Zhang, Matthew C. Fontaine, Amy K. Hoover, Stefanos Nikolaidis
In a Hearthstone deckbuilding case study, we show that our approach improves the sample efficiency of MAP-Elites and outperforms a model trained offline with random decks, as well as a linear surrogate model baseline, setting a new state-of-the-art for quality diversity approaches in automated Hearthstone deckbuilding.
1 code implementation • NeurIPS 2021 • Can Qin, Handong Zhao, Lichen Wang, Huan Wang, Yulun Zhang, Yun Fu
For slow learning of graph similarity, this paper proposes a novel early-fusion approach by designing a co-attention-based feature fusion network on multilevel GNN features.
1 code implementation • NeurIPS 2021 • Yulun Zhang, Huan Wang, Can Qin, Yun Fu
To address the above issues, we propose aligned structured sparsity learning (ASSL), which introduces a weight normalization layer and applies $L_2$ regularization to the scale parameters for sparsity.
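A sketch of the core regularization idea (assuming PyTorch's weight_norm utility and a dummy loss; the aligned selection of which scale parameters to penalize in ASSL is more involved than this):

```python
# L2 penalty on the learnable scale parameters of a weight-normalized conv,
# so unimportant filters are driven toward zero scale and can be pruned.
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

conv = weight_norm(nn.Conv2d(64, 64, 3, padding=1))  # adds weight_g (scales) and weight_v

def scale_l2_penalty(module, coeff=1e-4):
    return coeff * module.weight_g.pow(2).sum()

x = torch.randn(2, 64, 32, 32)
loss = conv(x).pow(2).mean() + scale_l2_penalty(conv)  # dummy task loss + sparsity term
loss.backward()
```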
4 code implementations • CVPR 2022 • Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, Luc van Gool
The HSI representations are highly similar and correlated across the spectral dimension.
Ranked #2 on Spectral Reconstruction on ARAD-1K
no code implementations • 19 Oct 2021 • K. R. Zentner, Ryan Julian, Ujjwal Puri, Yulun Zhang, Gaurav S. Sukhatme
We take a fresh look at this problem, by considering a setting in which the robot is limited to storing that knowledge and experience only in the form of learned skill policies.
no code implementations • 29 Sep 2021 • Jiamian Wang, Yulun Zhang, Xin Yuan, Yun Fu, Zhiqiang Tao
As the inverse process of snapshot compressive imaging, the hyperspectral image (HSI) reconstruction takes the 2D measurement as input and posteriorly retrieves the captured 3D spatial-spectral signal.
no code implementations • 29 Sep 2021 • Yi Xu, Lichen Wang, Yizhou Wang, Can Qin, Yulun Zhang, Yun Fu
In this paper, we propose a novel framework, MemREIN, which considers Memorized, Restitution, and Instance Normalization for cross-domain few-shot learning.
no code implementations • ICLR 2022 • Yulun Zhang, Huan Wang, Can Qin, Yun Fu
Specifically, for the layers connected by the same residual, we select the filters of the same indices as unimportant filters.
1 code implementation • 17 Aug 2021 • Jiamian Wang, Yulun Zhang, Xin Yuan, Yun Fu, Zhiqiang Tao
The emerging technology of snapshot compressive imaging (SCI) enables capturing high dimensional (HD) data in an efficient way.
no code implementations • 24 Jun 2021 • K. R. Zentner, Ryan Julian, Ujjwal Puri, Yulun Zhang, Gaurav Sukhatme
We explore possible methods for multi-task transfer learning which seek to exploit the shared physical structure of robotics tasks.
1 code implementation • 22 Jun 2021 • Yizhou Wang, Yue Kang, Can Qin, Huan Wang, Yi Xu, Yulun Zhang, Yun Fu
The intuition is that gradient with momentum contains more accurate directional information and therefore its second moment estimation is a more favorable option for learning rate scaling than that of the raw gradient.
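The core idea, scaling the step by the second moment of the momentumized gradient rather than of the raw gradient, can be sketched as a simplified Adam-style update (bias correction omitted; an illustrative assumption, not the paper's full optimizer):

```python
# Adam-style step whose second moment tracks the momentum m_t, not the raw g_t.
import torch

def momentumized_adam_step(p, g, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = state.setdefault("m", torch.zeros_like(p))
    v = state.setdefault("v", torch.zeros_like(p))
    m.mul_(b1).add_(g, alpha=1 - b1)          # first moment: momentumized gradient
    v.mul_(b2).addcmul_(m, m, value=1 - b2)   # second moment of m, not of the raw gradient
    p.data.add_(m / (v.sqrt() + eps), alpha=-lr)

p = torch.randn(10, requires_grad=True)
(p ** 2).sum().backward()
momentumized_adam_step(p, p.grad, {})
print(p.norm())
```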
1 code implementation • 21 Jun 2021 • Matthew C. Fontaine, Ya-Chuan Hsu, Yulun Zhang, Bryon Tjanaka, Stefanos Nikolaidis
When studying robots collaborating with humans, much of the focus has been on robot policies that coordinate fluently with human teammates in collaborative tasks.
no code implementations • CVPR 2021 • Yulun Zhang, Kai Li, Kunpeng Li, Yun Fu
They also fail to sense the entire space of the input, which is critical for high-quality MR image SR. To address these problems, we propose squeeze-and-excitation reasoning attention networks (SERAN) for accurate MR image SR, squeezing attention from the global spatial information of the input to obtain global descriptors.
Ranked #2 on Image Super-Resolution on IXI
no code implementations • CVPR 2021 • Xiaowan Hu, Ruijun Ma, Zhihong Liu, Yuanhao Cai, Xiaole Zhao, Yulun Zhang, Haoqian Wang
The extraction of auto-correlation in images has shown great potential in deep learning networks, such as the self-attention mechanism in the channel domain and the self-similarity mechanism in the spatial domain.
1 code implementation • ICCV 2021 • Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, Yun Fu
This paper studies Semi-Supervised Domain Adaptation (SSDA), a practical yet under-investigated research topic that aims to learn a model of good performance using unlabeled samples and a few labeled samples in the target domain, with the help of labeled samples from a source domain.
1 code implementation • 15 Apr 2021 • Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, Chenliang Xu
A naïve method is to decompose it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
2 code implementations • 11 Mar 2021 • Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
Neural network pruning typically removes connections or neurons from a pretrained converged model; while a new pruning paradigm, pruning at initialization (PaI), attempts to prune a randomly initialized network.
1 code implementation • 14 Jan 2021 • Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, Ming-Hsuan Yang
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, for the image to be faithfully reconstructed from the inverted code by the generator.
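Optimization-based inversion, the simplest variant of this idea, can be sketched as follows (the stand-in generator and plain MSE loss are assumptions; real pipelines invert a pretrained GAN and add perceptual losses):

```python
# Optimize a latent code so a frozen generator reproduces a target image.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())   # stand-in for a pretrained generator
target = torch.rand(1, 3 * 32 * 32) * 2 - 1                  # stand-in target image

z = torch.zeros(1, 128, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(G(z), target)
    loss.backward()
    opt.step()
print("reconstruction error:", loss.item())
```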
no code implementations • ICCV 2021 • Yulun Zhang, Donglai Wei, Can Qin, Huan Wang, Hanspeter Pfister, Yun Fu
However, the basic convolutional layer in CNNs is designed to extract local patterns, lacking the ability to model global context.
no code implementations • ICCV 2021 • Salma Abdel Magid, Yulun Zhang, Donglai Wei, Won-Dong Jang, Zudi Lin, Yun Fu, Hanspeter Pfister
Specifically, we propose a dynamic high-pass filtering (HPF) module that locally applies adaptive filter weights for each spatial location and channel group to preserve high-frequency signals.
1 code implementation • ICLR 2021 • Huan Wang, Can Qin, Yulun Zhang, Yun Fu
Regularization has long been utilized to learn sparsity in deep neural network pruning.
1 code implementation • NeurIPS 2020 • Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu, Thomas S. Huang
Inspired by the robustness and efficiency of sparse representation in sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
2 code implementations • 28 Apr 2020 • Yiqun Mei, Yuchen Fan, Yulun Zhang, Jiahui Yu, Yuqian Zhou, Ding Liu, Yun Fu, Thomas S. Huang, Humphrey Shi
Self-similarity refers to the image prior, widely used in image restoration algorithms, that small but similar patterns tend to recur at different locations and scales.
1 code implementation • CVPR 2020 • Kai Li, Yulun Zhang, Kunpeng Li, Yun Fu
The recent flourish of deep learning in various tasks is largely accredited to the rich and accessible labeled data.
3 code implementations • CVPR 2020 • Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, Chenliang Xu
Rather than synthesizing missing LR video frames as VFI networks do, we first temporally interpolate LR frame features for the missing LR video frames, capturing local temporal contexts with the proposed feature temporal interpolation network.
Ranked #4 on Video Frame Interpolation on Vid4 - 4x upscaling
no code implementations • ECCV 2020 • Yulun Zhang, Zhifei Zhang, Stephen DiVerdi, Zhaowen Wang, Jose Echevarria, Yun Fu
We aim to super-resolve digital paintings, synthesizing realistic details from high-resolution reference painting materials for very large scaling factors (e.g., 8X, 16X).
1 code implementation • 19 Nov 2019 • Yu Yin, Joseph P. Robinson, Yulun Zhang, Yun Fu
As for SR, the proposed method recovers sharper edges and more details from LR face images than other state-of-the-art methods, which we demonstrate qualitatively and quantitatively.
2 code implementations • ICCV 2019 • Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, Yun Fu
It outperforms the current best method by a relative 6.8% for image retrieval and 4.8% for caption retrieval on MS-COCO (Recall@1 using the 1K test set).
Ranked #8 on Image Retrieval on Flickr30K 1K test
1 code implementation • 7 Jul 2019 • Xiaole Zhao, Ying Liao, Tian He, Yulun Zhang, Yadong Wu, Tao Zhang
Most current image super-resolution (SR) methods based on convolutional neural networks (CNNs) use residual learning in network structural design, which favors effective back-propagation and hence improves SR performance by increasing model scale.
2 code implementations • ICCV 2019 • Yulun Zhang, Chen Fang, Yilin Wang, Zhaowen Wang, Zhe Lin, Yun Fu, Jimei Yang
An assumption widely used in recent neural style transfer methods is that image styles can be described by global statistics of deep features, such as Gram or covariance matrices.
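For reference, the Gram matrix of deep features, the global statistic these methods build on, can be computed as follows (a minimal sketch with assumed feature shapes):

```python
# Gram matrix of deep features: the channel-correlation style statistic.
import torch

def gram_matrix(feat):
    """feat: (B, C, H, W) -> (B, C, C) channel-correlation (Gram) matrices."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (h * w)

print(gram_matrix(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 64])
```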
2 code implementations • ICLR 2019 • Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, Yun Fu
To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts.
3 code implementations • 25 Dec 2018 • Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu
We fully exploit the hierarchical features from all the convolutional layers.
Ranked #1 on Color Image Denoising on Kodak24 sigma30
2 code implementations • 7 Dec 2018 • Yapeng Tian, Yulun Zhang, Yun Fu, Chenliang Xu
Video super-resolution (VSR) aims to restore a photo-realistic high-resolution (HR) video frame from both its corresponding low-resolution (LR) frame (reference frame) and multiple neighboring frames (supporting frames).
no code implementations • 15 Oct 2018 • Xiaole Zhao, Yulun Zhang, Tao Zhang, Xueming Zou
The proposed CSN model divides the hierarchical features into two branches, i. e., residual branch and dense branch, with different information transmissions.
Ranked #3 on Image Super-Resolution on IXI
1 code implementation • 18 Aug 2018 • Kai Li, Zhengming Ding, Kunpeng Li, Yulun Zhang, Yun Fu
To ensure scalability and separability, a softmax-like function is formulated to push apart the positive and negative support sets.
20 code implementations • ECCV 2018 • Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu
To solve these problems, we propose the very deep residual channel attention networks (RCAN).
Ranked #21 on Image Super-Resolution on BSD100 - 4x upscaling
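The channel attention at the heart of the residual channel attention block follows a squeeze-and-excitation pattern; a simplified sketch (reduction ratio and layer layout are illustrative assumptions):

```python
# Squeeze-and-excitation style channel attention: pool, bottleneck MLP, rescale.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average pooling
        self.fc = nn.Sequential(                      # excitation: bottleneck MLP
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))              # rescale channels by learned weights

x = torch.randn(1, 64, 48, 48)
print(ChannelAttention(64)(x).shape)
```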
16 code implementations • CVPR 2018 • Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu
In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers.
Ranked #5 on Image Super-Resolution on IXI