1 code implementation • 29 Sep 2024 • Kun Cheng, Lei Yu, Zhijun Tu, Xiao He, Liyu Chen, Yong Guo, Mingrui Zhu, Nannan Wang, Xinbo Gao, Jie Hu
In this work, we design an effective diffusion transformer for image super-resolution (DiT-SR) that matches the visual quality of prior-based methods while being trained entirely from scratch.
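For context on what "training from scratch" involves here, below is a minimal sketch of one conditional diffusion training step for super-resolution. The denoiser interface, the linear noise schedule, and conditioning by concatenating the upsampled LR image are illustrative assumptions, not the actual DiT-SR design.

```python
# Sketch of one from-scratch diffusion training step for SR.
# Denoiser, schedule, and LR-conditioning are illustrative assumptions.
import torch
import torch.nn.functional as F

def diffusion_sr_step(denoiser, hr, lr_upsampled, num_steps=1000):
    """One epsilon-prediction step: add noise to the HR target and
    predict it, conditioning on the (upsampled) LR input."""
    b = hr.size(0)
    t = torch.randint(0, num_steps, (b,), device=hr.device)
    # simple linear beta schedule -> cumulative alphas
    betas = torch.linspace(1e-4, 2e-2, num_steps, device=hr.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(hr)
    x_t = alpha_bar.sqrt() * hr + (1.0 - alpha_bar).sqrt() * noise
    # condition by channel-wise concatenation of the LR image
    pred = denoiser(torch.cat([x_t, lr_upsampled], dim=1), t)
    return F.mse_loss(pred, noise)
```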
1 code implementation • 14 Aug 2024 • Xiao He, Huaao Tang, Zhijun Tu, Junchao Zhang, Kun Cheng, Hanting Chen, Yong Guo, Mingrui Zhu, Nannan Wang, Xinbo Gao, Jie Hu
Specifically, we introduce a novel score distillation strategy to align the data distribution between the outputs of the student and teacher models after minor noise perturbation.
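A minimal sketch of the general idea, aligning student and teacher outputs through a frozen score/noise predictor after the same small noise perturbation, is shown below. The function names, loss form, and noise level are assumptions, not the paper's exact formulation.

```python
# Sketch of a score-distillation-style loss: perturb student and teacher
# outputs with the same small noise and align the teacher's predictions
# on them. Loss form and sigma are assumptions.
import torch
import torch.nn.functional as F

def noisy_output_distillation(teacher_eps, student_out, teacher_out, sigma=0.05):
    """teacher_eps: frozen noise/score predictor taking (x, sigma).
    student_out / teacher_out: images produced by student and teacher."""
    noise = torch.randn_like(student_out) * sigma
    with torch.no_grad():
        target_score = teacher_eps(teacher_out + noise, sigma)
    pred_score = teacher_eps(student_out + noise, sigma)
    # align the two output distributions through the teacher's score estimates
    return F.mse_loss(pred_score, target_score)
```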
1 code implementation • 4 May 2024 • Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, Yunhe Wang
Diffusion Transformers (DiTs) introduce the transformer architecture to diffusion tasks for latent-space image generation.
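As a reference point for the DiT family, here is a minimal sketch of a transformer block operating on latent patch tokens with adaLN-style timestep modulation. Dimensions and conditioning details are illustrative, not a specific published architecture.

```python
# Minimal DiT-style block: standard transformer block on latent patch
# tokens, modulated by a timestep embedding (adaLN-style shift/scale).
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # timestep embedding -> per-block shifts and scales
        self.ada = nn.Linear(dim, 4 * dim)

    def forward(self, tokens, t_emb):
        shift1, scale1, shift2, scale2 = self.ada(t_emb).unsqueeze(1).chunk(4, dim=-1)
        h = self.norm1(tokens) * (1 + scale1) + shift1
        tokens = tokens + self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(tokens) * (1 + scale2) + shift2
        return tokens + self.mlp(h)
```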
no code implementations • 9 Apr 2024 • Junbo Qiao, Wei Li, Haizhen Xie, Hanting Chen, Yunshuai Zhou, Zhijun Tu, Jie Hu, Shaohui Lin
Extensive experiments on multiple image processing tasks (e.g., image super-resolution (SR), JPEG artifact reduction, and image denoising) demonstrate the superiority of LIPT in both latency and PSNR.
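As a quick reference for the PSNR metric cited in these results, a minimal implementation assuming images scaled to [0, 1]:

```python
# PSNR as reported above, assuming images in [0, 1].
import torch

def psnr(pred, target, eps=1e-12):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(1.0 / (mse + eps))
```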
no code implementations • 31 Mar 2024 • Zhijun Tu, Kunpeng Du, Hanting Chen, Hailing Wang, Wei Li, Jie Hu, Yunhe Wang
Recent advances have demonstrated the powerful capability of transformer architecture in image restoration.
no code implementations • 5 Feb 2024 • Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhijun Tu, Kai Han, Hailin Hu, DaCheng Tao
Model compression methods reduce the memory and computational cost of Transformers, a necessary step for deploying large language/vision models on practical devices.
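A back-of-envelope illustration of why this matters for deployment; the 7B-parameter size is an arbitrary example, not tied to any specific model in the survey:

```python
# Approximate weight memory of a 7B-parameter model at different precisions.
params = 7e9
for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {params * bits / 8 / 1e9:.1f} GB")
# fp16: 14.0 GB, int8: 7.0 GB, int4: 3.5 GB
```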
no code implementations • 13 Dec 2023 • Xin Ding, Xiaoyu Liu, Zhijun Tu, Yun Zhang, Wei Li, Jie Hu, Hanting Chen, Yehui Tang, Zhiwei Xiong, Baoqun Yin, Yunhe Wang
Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) with ultra-low costs.
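For orientation, a minimal sketch of a generic weight-only PTQ baseline with per-channel symmetric int8 scales; this is a common starting point, not the paper's method.

```python
# Generic weight-only PTQ baseline: per-output-channel symmetric int8.
import torch

def quantize_weight_int8(w):
    """w: (out_features, in_features) float weight. Returns int8 weights
    and per-output-channel scales for dequantization."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q, scale):
    return q.to(torch.float32) * scale
```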
1 code implementation • 25 Sep 2023 • Yun Zhang, Wei Li, Simiao Li, Hanting Chen, Zhijun Tu, Wenjia Wang, BingYi Jing, Shaohui Lin, Jie Hu
Knowledge distillation (KD) compresses deep neural networks by transferring task-related knowledge from cumbersome pre-trained teacher models to compact student models.
Ranked #27 on Image Super-Resolution on Urban100 - 4x upscaling
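A minimal sketch of the output-level knowledge-distillation setup described in the entry above, applied to super-resolution; the L1 losses and the weighting factor are illustrative assumptions rather than the paper's exact objective.

```python
# Output-level KD for SR: the compact student matches both the ground
# truth and the frozen teacher's prediction. Losses and weighting are
# assumptions.
import torch
import torch.nn.functional as F

def kd_sr_loss(student, teacher, lr, hr, alpha=0.5):
    with torch.no_grad():
        teacher_sr = teacher(lr)          # cumbersome pre-trained teacher
    student_sr = student(lr)              # compact student
    task = F.l1_loss(student_sr, hr)      # supervised reconstruction term
    distill = F.l1_loss(student_sr, teacher_sr)  # knowledge transfer term
    return task + alpha * distill
```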
2 code implementations • CVPR 2023 • Zhijun Tu, Jie Hu, Hanting Chen, Yunhe Wang
In this paper, we study post-training quantization (PTQ) for image super-resolution using only a few unlabeled calibration images.
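Below is a sketch of what calibration with a few unlabeled images typically looks like: forward hooks record per-layer activation ranges, from which quantization scales can be derived. This is a generic min-max calibration loop, not the paper's method.

```python
# Generic PTQ calibration: record per-layer activation ranges from a
# handful of unlabeled inputs.
import torch
import torch.nn as nn

def calibrate_activation_ranges(model, calib_batches):
    ranges, hooks = {}, []

    def make_hook(name):
        def hook(_module, _inp, out):
            lo, hi = out.min().item(), out.max().item()
            old = ranges.get(name, (lo, hi))
            ranges[name] = (min(old[0], lo), max(old[1], hi))
        return hook

    for name, m in model.named_modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            hooks.append(m.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        for x in calib_batches:    # only a few unlabeled images
            model(x)
    for h in hooks:
        h.remove()
    return ranges
```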
3 code implementations • 17 Aug 2022 • Zhijun Tu, Xinghao Chen, Pengju Ren, Yunhe Wang
Since modern deep neural networks adopt sophisticated, complex architectures to attain high accuracy, the distributions of their weights and activations are highly diverse.
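The short script below illustrates this diversity by printing per-layer weight statistics of a pretrained network; ResNet-18 is used only as a convenient example, and the takeaway is that quantization/binarization parameters generally need to adapt per layer rather than being shared globally.

```python
# Illustration of per-layer distribution diversity in a pretrained network.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for name, m in model.named_modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        w = m.weight.detach()
        print(f"{name:<20s} mean={w.mean():+.4f} std={w.std():.4f} "
              f"max|w|={w.abs().max():.4f}")
```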
no code implementations • 25 Jan 2021 • Qiangwei Yin, Zhijun Tu, Chunsheng Gong, Yang Fu, Shaohua Yan, Hechang Lei
We report the discovery of superconductivity and detailed normal-state physical properties of RbV3Sb5 single crystals with a V kagome lattice.
Superconductivity · Materials Science