Search Results for author: Lanxiang Hu

Found 3 papers, 1 paper with code

CLLMs: Consistency Large Language Models

1 code implementation • 28 Feb 2024 • Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang

Parallel decoding methods such as Jacobi decoding show promise for more efficient LLM inference, as they break the sequential nature of the LLM decoding process and transform it into parallelizable computation.
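To make the idea above concrete, here is a minimal sketch of Jacobi-style parallel decoding: all new token positions are guessed at once and refined in parallel until a fixed point. It assumes a Hugging Face-style causal LM whose forward pass returns per-position logits; the function name, greedy update, and zero-initialized guess are illustrative assumptions, not code from the CLLMs paper.

```python
import torch

@torch.no_grad()
def jacobi_decode(model, prompt_ids, n_new_tokens, max_iters=32):
    # prompt_ids: 1-D LongTensor of prompt token ids (assumed interface).
    # Start from an arbitrary guess for all n_new_tokens positions at once.
    guess = torch.zeros(n_new_tokens, dtype=torch.long)
    for _ in range(max_iters):
        # One forward pass scores every guessed position in parallel.
        inputs = torch.cat([prompt_ids, guess])
        logits = model(inputs.unsqueeze(0)).logits[0]
        # Greedy update: each guessed token is replaced by the model's
        # prediction conditioned on the prompt and the previous iterate.
        new_guess = logits[len(prompt_ids) - 1 : -1].argmax(dim=-1)
        if torch.equal(new_guess, guess):  # fixed point reached
            break
        guess = new_guess
    return guess
```

In the worst case this degenerates to one token per iteration (like sequential decoding), but whenever several positions stabilize in the same iteration, multiple tokens are effectively decoded per forward pass.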

PockEngine: Sparse and Efficient Fine-tuning in a Pocket

no code implementations • 26 Oct 2023 • Ligeng Zhu, Lanxiang Hu, Ji Lin, Wei-Chen Wang, Wei-Ming Chen, Chuang Gan, Song Han

On-device learning and efficient fine-tuning enable continuous and privacy-preserving customization (e.g., locally fine-tuning large language models on personalized data).

Privacy Preserving

Online Speculative Decoding

no code implementations • 11 Oct 2023 • Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Ion Stoica, Zhijie Deng, Alvin Cheung, Hao Zhang

We develop a prototype of online speculative decoding based on online knowledge distillation and evaluate it using both synthetic and real query data on several popular LLMs.

Knowledge Distillation
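As a rough illustration of the approach described in the snippet above, the sketch below pairs a small draft model with a large target model: the draft proposes a few tokens, the target verifies them in a single forward pass, and mismatched positions supply an on-the-fly distillation signal. Both models are assumed to expose a Hugging Face-style interface; the names `draft`, `target`, `speculative_step`, and the plain cross-entropy update are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def speculative_step(draft, target, context, k, optimizer):
    # context: 1-D LongTensor of token ids; k: number of draft tokens.
    with torch.no_grad():
        # 1. The small draft model proposes k tokens autoregressively (cheap).
        ctx = context.clone()
        for _ in range(k):
            nxt = draft(ctx.unsqueeze(0)).logits[0, -1].argmax()
            ctx = torch.cat([ctx, nxt.view(1)])
        # 2. The large target model scores all k proposals in one forward pass.
        target_logits = target(ctx.unsqueeze(0)).logits[0, len(context) - 1 : -1]
        target_tokens = target_logits.argmax(dim=-1)
        # Accept the longest prefix where draft and target agree (greedy variant).
        proposal = ctx[len(context):]
        n_accept = 0
        for p, t in zip(proposal.tolist(), target_tokens.tolist()):
            if p != t:
                break
            n_accept += 1
    # 3. Online knowledge distillation: when the draft diverges, nudge its
    #    distribution toward the target's labels on the query observed at serving time.
    if n_accept < k:
        draft_logits = draft(ctx[:-1].unsqueeze(0)).logits[0, len(context) - 1 :]
        loss = F.cross_entropy(draft_logits, target_tokens)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Return accepted tokens plus one corrected token from the target.
    return target_tokens[: n_accept + 1]
```

The design intuition is that the distillation step continually adapts the draft model to the live query distribution, so its acceptance rate (and hence the speedup from speculation) improves as more traffic is observed.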
