Search Results for author: Yushi Huang

Found 4 papers, 2 papers with code

HarmoniCa: Harmonizing Training and Inference for Better Feature Cache in Diffusion Transformer Acceleration

no code implementations • 2 Oct 2024 • Yushi Huang, Zining Wang, Ruihao Gong, Jing Liu, Xinjie Zhang, Jun Zhang

Upon detailed analysis, we pinpoint that these discrepancies primarily stem from two aspects: (1) Prior Timestep Disregard, where training ignores the effect of cache usage at earlier timesteps, and (2) Objective Mismatch, where the training target (aligning the predicted noise at each timestep) deviates from the goal of inference (generating a high-quality image).
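
A minimal sketch of the general setting the abstract describes: a multi-step denoising loop in which a block's feature is sometimes reused from an earlier timestep instead of recomputed. This is not the HarmoniCa method; the toy block, the cache schedule, and the update rule are all hypothetical, chosen only to make "Prior Timestep Disregard" and "Objective Mismatch" concrete.

```python
# Toy illustration (not HarmoniCa): feature caching across denoising timesteps.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyBlock(nn.Module):
    """Stand-in for one transformer block of a diffusion transformer (DiT)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, t: float) -> torch.Tensor:
        t_col = torch.full((x.shape[0], 1), t)          # concatenate the timestep as conditioning
        return self.net(torch.cat([x, t_col], dim=-1))

def denoise(block: ToyBlock, x: torch.Tensor, steps: int = 8, cache_every: int = 2):
    """Multi-step loop; on 'cached' steps the previous block output is reused.

    A stale feature reused at step t contaminates every later step, which a loss
    defined on a single timestep in isolation never sees (Prior Timestep Disregard).
    """
    cached = None
    for i in range(steps):
        t = 1.0 - i / steps
        if cached is not None and i % cache_every != 0:
            feat = cached                                # reuse the stale cached feature
        else:
            feat = block(x, t)                           # recompute and refresh the cache
            cached = feat
        x = x - 0.1 * feat                               # toy update toward the clean sample
    return x

block = ToyBlock()
x_noisy = torch.randn(4, 16)

# Objective Mismatch: a loss that only aligns each step's prediction against a
# per-step noise target says nothing about the quality of the final x produced
# by the full cached trajectory computed here.
x_final = denoise(block, x_noisy)
print(x_final.shape)  # torch.Size([4, 16])
```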

Temporal Feature Matters: A Framework for Diffusion Model Quantization

no code implementations • 28 Jul 2024 • Yushi Huang, Ruihao Gong, Xianglong Liu, Jing Liu, Yuhang Li, Jiwen Lu, DaCheng Tao

However, unlike traditional models, diffusion models critically rely on the time step for multi-round denoising.

Denoising • Image Generation +1
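
To make the abstract's point about the time step concrete, here is a small sketch, not the paper's framework: a standard sinusoidal time-step embedding is queried once per denoising round, so any low-bit error on this temporal path is re-applied at every step. The `fake_quantize` helper and the bit-width are assumptions for illustration only.

```python
# Toy illustration (not the paper's method): the time-step embedding conditions
# every denoising round, so per-step quantization error on it recurs many times.
import math
import torch

def timestep_embedding(t: int, dim: int = 8) -> torch.Tensor:
    """Sinusoidal time-step embedding commonly used by diffusion models."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = t * freqs
    return torch.cat([torch.cos(args), torch.sin(args)])

def fake_quantize(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform quantize-dequantize, a stand-in for low-bit inference of this path."""
    scale = x.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(x / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale

# Sum the per-step error on the temporal feature as a rough proxy for how often
# it is re-applied across the multi-round trajectory.
drift = 0.0
for t in range(1000, 0, -50):            # 20 denoising rounds
    emb = timestep_embedding(t)
    drift += (emb - fake_quantize(emb, bits=4)).abs().mean().item()
print(f"summed temporal-feature quantization error over the trajectory: {drift:.4f}")
```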

LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit

1 code implementation • 9 May 2024 • Ruihao Gong, Yang Yong, Shiqiao Gu, Yushi Huang, Chentao Lv, Yunchen Zhang, Xianglong Liu, DaCheng Tao

Recent advancements in large language models (LLMs) are propelling us toward artificial general intelligence with their remarkable emergent abilities and reasoning capabilities.

Benchmarking • Computational Efficiency +3

TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models

1 code implementation • CVPR 2024 • Yushi Huang, Ruihao Gong, Jing Liu, Tianlong Chen, Xianglong Liu

Remarkably, our quantization approach, for the first time, achieves model performance nearly on par with the full-precision model under 4-bit weight quantization.

Denoising • Image Generation +1
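
For readers unfamiliar with the "4-bit weight quantization" setting mentioned in the abstract, the sketch below shows generic per-output-channel symmetric 4-bit fake quantization of a linear layer's weights. It is not the TFMQ-DM method; the layer size, input, and error metric are arbitrary assumptions.

```python
# Generic 4-bit weight fake-quantization sketch (not TFMQ-DM).
import torch
import torch.nn as nn

torch.manual_seed(0)

def quantize_weight_4bit(w: torch.Tensor) -> torch.Tensor:
    """Fake-quantize weights to 4-bit integers per output channel, then dequantize."""
    qmax = 2 ** (4 - 1) - 1                          # symmetric integer range [-8, 7]
    scale = w.abs().amax(dim=1, keepdim=True) / qmax
    q = torch.round(w / scale).clamp(-qmax - 1, qmax)
    return q * scale

layer = nn.Linear(64, 64, bias=False)
x = torch.randn(8, 64)

with torch.no_grad():
    ref = layer(x)                                   # full-precision output
    layer.weight.copy_(quantize_weight_4bit(layer.weight))
    out = layer(x)                                   # output with 4-bit weights

print(f"relative output error under 4-bit weights: {(out - ref).norm() / ref.norm():.3f}")
```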
