Search Results for author: Yuechi Zhou

Found 2 papers, 2 papers with code

OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning

1 code implementation • 9 May 2024 • Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wenjing Xie, Yuechi Zhou, Yuyang Ding, Zecheng Tang, Jikai Wang, Yixin Ji, Yue Wang, Pei Guo, Zechen Sun, Zikang Zhang, Juntao Li, Pingfu Chao, Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang

Large Language Models (LLMs) have played an important role in many fields due to their powerful capabilities. However, their massive parameter counts lead to high deployment requirements and significant inference costs, which impede their practical application.
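The abstract motivates compression without detailing the method here. As a generic, hedged illustration of weight pruning in PyTorch (one-shot magnitude pruning, not the paper's multi-stage procedure; the mapping from the 77.3% figure to a sparsity level is an assumption made purely for illustration):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of entries to remove, e.g. 0.773 as an
    illustrative stand-in for a 77.3% compression ratio. This one-shot
    scheme is a generic baseline, not OpenBA-V2's multi-stage pruning.
    """
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    # Threshold = magnitude of the k-th smallest entry.
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask.to(weight.dtype)

# Example: prune a random 4096x4096 projection to ~77% sparsity.
w = torch.randn(4096, 4096)
w_pruned = magnitude_prune(w, 0.773)
print(f"sparsity: {(w_pruned == 0).float().mean().item():.3f}")
```

Practical pipelines usually prune gradually across several stages with fine-tuning in between, which is closer in spirit to the multi-stage approach the title describes.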

Chinese grammatical error correction based on knowledge distillation

2 code implementations • 31 Jul 2022 • Peng Xia, Yuechi Zhou, Ziyan Zhang, Zecheng Tang, Juntao Li

Existing Chinese grammatical error correction models show poor robustness on adversarial (attack) test sets and have large parameter counts. This paper therefore applies knowledge distillation to compress the model's parameters and improve its resistance to adversarial attacks.

Grammatical Error Correction • Knowledge Distillation
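The abstract above describes compressing a grammatical error correction model via knowledge distillation. Below is a minimal sketch of the standard soft-label distillation objective (Hinton et al., 2015), assuming a teacher-student setup in PyTorch; the paper's exact loss, temperature, and weighting may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Standard soft-label knowledge distillation.

    Combines KL divergence against the teacher's temperature-softened
    outputs with ordinary cross-entropy on gold labels. The temperature
    and alpha here are placeholders, not the paper's settings.
    """
    # Soft targets: student matches the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale gradients, as in the original KD paper
    # Hard targets: cross-entropy on the correct labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example with a dummy batch of 8 examples over a 21128-token vocabulary
# (the Chinese BERT vocab size, used here only as a plausible dimension).
s = torch.randn(8, 21128)
t = torch.randn(8, 21128)
y = torch.randint(0, 21128, (8,))
print(distillation_loss(s, t, y).item())
```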
