Search Results for author: Zongyuan Zhan

Found 3 papers, 1 paper with code

FastAttention: Extend FlashAttention2 to NPUs and Low-resource GPUs

no code implementations · 22 Oct 2024 · Haoran Lin, Xianzhi Yu, Kang Zhao, Lu Hou, Zongyuan Zhan, Stanislav Kamenev, Han Bao, Ting Hu, Mingkai Wang, Qixin Chang, Siyue Sui, Weihao Sun, Jiaxin Hu, Jun Yao, Zekun Yin, Cheng Qian, Ying Zhang, Yinfei Pan, Yu Yang, Weiguo Liu

In this work, we propose FastAttention which pioneers the adaptation of FlashAttention series for NPUs and low-resource GPUs to boost LLM inference efficiency.
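The FlashAttention series that FastAttention builds on avoids materializing the full attention matrix by processing keys and values in tiles with an online softmax. A minimal sketch of that core idea (illustrative only; tile size and layout are assumptions, not details from the paper):

```python
import numpy as np

def tiled_attention(Q, K, V, tile=32):
    """Tiled attention with an online (streaming) softmax, the core idea
    behind the FlashAttention series: K/V are consumed in tiles so the
    full N x N score matrix is never materialized."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    # running max, running normalizer, and running weighted sum per query row
    m = np.full(N, -np.inf)
    l = np.zeros(N)
    acc = np.zeros((N, V.shape[1]))
    for s in range(0, N, tile):
        Kt, Vt = K[s:s + tile], V[s:s + tile]
        S = (Q @ Kt.T) * scale                  # scores for this tile only
        m_new = np.maximum(m, S.max(axis=1))
        alpha = np.exp(m - m_new)               # rescale old statistics to new max
        P = np.exp(S - m_new[:, None])
        l = l * alpha + P.sum(axis=1)
        acc = acc * alpha[:, None] + P @ Vt
        m = m_new
    return acc / l[:, None]
```

The result matches ordinary softmax attention exactly; the win is that peak memory scales with the tile size rather than the sequence length squared.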

Lformer: Text-to-Image Generation with L-shape Block Parallel Decoding

no code implementations · 7 Mar 2023 · Jiacheng Li, Longhui Wei, Zongyuan Zhan, Xin He, Siliang Tang, Qi Tian, Yueting Zhuang

To better accelerate the generative transformers while keeping good generation quality, we propose Lformer, a semi-autoregressive text-to-image generation model.

Diversity · Text-to-Image Generation
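One plausible reading of L-shape block parallel decoding (an illustration of the idea, not the paper's exact scheme) is that an n×n token grid is partitioned into nested L-shaped blocks, with all tokens in a block generated in parallel, so the grid takes n decoding steps instead of n² autoregressive ones:

```python
def lshape_blocks(n):
    """Partition an n x n token grid into L-shaped blocks (hypothetical
    reading of Lformer's scheme): block k holds every position (i, j)
    with max(i, j) == k. Tokens within a block can be decoded in
    parallel, giving n steps for n*n tokens."""
    blocks = []
    for k in range(n):
        # bottom row of the L, then the right column above it
        block = [(k, j) for j in range(k + 1)] + [(i, k) for i in range(k)]
        blocks.append(block)
    return blocks
```

Block k contains 2k+1 positions, and the n blocks tile the grid exactly, which is what makes the semi-autoregressive step count linear in the grid side length.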

Component Divide-and-Conquer for Real-World Image Super-Resolution

1 code implementation · ECCV 2020 · Pengxu Wei, Ziwei Xie, Hannan Lu, Zongyuan Zhan, Qixiang Ye, WangMeng Zuo, Liang Lin

Learning an SR model with a conventional pixel-wise loss is easily dominated by flat regions and edges, and fails to infer realistic details in complex textures.

Image Super-Resolution
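To make the flat-region dominance concrete, here is one simple countermeasure: re-weighting a pixel-wise L1 loss by local gradient magnitude so that textured areas contribute more. This is an illustrative sketch only, not the paper's Component Divide-and-Conquer method:

```python
import numpy as np

def gradient_weighted_l1(sr, hr, eps=1e-6):
    """Illustrative sketch (NOT the paper's CDC loss): weight per-pixel
    L1 error by the HR image's local gradient magnitude, so flat regions
    (near-zero gradients) stop dominating the average."""
    gy, gx = np.gradient(hr.astype(np.float64))
    w = np.sqrt(gx ** 2 + gy ** 2)
    w = w / (w.mean() + eps)            # normalize weights to roughly mean 1
    return float(np.mean(w * np.abs(sr - hr)))
```

With a plain L1 loss, a large flat background can swamp the few textured pixels where detail actually matters; the weighting above zeroes out the contribution of perfectly flat regions.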
