Search Results for author: Jianyu Wei

Found 2 papers, 2 papers with code

AFPQ: Asymmetric Floating Point Quantization for LLMs

1 code implementation • 3 Nov 2023 • Yijia Zhang, Sicheng Zhang, Shijie Cao, Dayou Du, Jianyu Wei, Ting Cao, Ningyi Xu

Large language models (LLMs) show strong performance across a wide range of tasks, but face deployment challenges due to limited memory capacity and bandwidth.

Quantization
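The snippet above only states the motivation, so here is a minimal sketch of what group-wise asymmetric weight quantization looks like in general: each group of weights gets separate scales for its positive and negative values instead of one symmetric scale. This is an illustration of the general idea of asymmetry, not a reproduction of the paper's exact AFPQ scheme; the function name, group size, and bit width are assumptions.

```python
import numpy as np

def quantize_asymmetric(w, n_bits=4, group_size=128):
    """Illustrative group-wise asymmetric quantization:
    separate scales for positive and negative values per group."""
    groups = w.reshape(-1, group_size)
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 levels for 4-bit
    pos_scale = np.maximum(groups.max(axis=1, keepdims=True), 1e-8) / qmax
    neg_scale = np.maximum(-groups.min(axis=1, keepdims=True), 1e-8) / qmax
    scale = np.where(groups >= 0, pos_scale, neg_scale)
    q = np.clip(np.round(groups / scale), -qmax, qmax)  # quantize
    return (q * scale).reshape(w.shape)                  # dequantize

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)
    w_hat = quantize_asymmetric(w)
    print("mean abs error:", np.abs(w - w_hat).mean())
```

Using separate positive and negative scales lets each group track a skewed weight distribution more closely than a single symmetric range would, at the cost of storing one extra scale per group.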

Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference

1 code implementation • 23 Aug 2023 • Ranggi Hwang, Jianyu Wei, Shijie Cao, Changho Hwang, Xiaohu Tang, Ting Cao, Mao Yang

To tackle the high compute requirements of LLMs, the Mixture-of-Experts (MoE) architecture was introduced, which scales model size without proportionally scaling up computational requirements.
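To make the scaling argument concrete, the sketch below shows a generic top-k MoE layer, where each token is routed to only k of the experts. This is not the Pre-gated MoE system described in the paper, only a baseline illustration of MoE routing; the class name, dimensions, and expert count are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k MoE layer: each token runs through only k experts,
    so per-token compute stays roughly flat as the expert count grows."""
    def __init__(self, d_model=256, d_ff=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)        # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                # x: (tokens, d_model)
        scores = self.gate(x)                            # router logits
        topv, topi = scores.topk(self.k, dim=-1)         # pick k experts/token
        weights = F.softmax(topv, dim=-1)                # mixing weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e                # tokens sent to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

if __name__ == "__main__":
    x = torch.randn(16, 256)
    print(TopKMoE()(x).shape)                            # torch.Size([16, 256])
```

Because only k of the n_experts experts execute per token, total parameter count can grow with the number of experts while per-token FLOPs stay roughly constant, which is the scaling property the abstract refers to.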
