Search Results for author: Zecheng Tang

Found 5 papers, 4 papers with code

Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference!

1 code implementation • 8 May 2023 • Zecheng Tang, Pinzheng Wang, Keyan Zhou, Juntao Li, Ziqiang Cao, Min Zhang

Diffusion models have been successfully adapted to text generation tasks by mapping the discrete text into the continuous space.

Text Generation
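The abstract above mentions the key adaptation step: discrete tokens are mapped into a continuous embedding space so that Gaussian diffusion can be applied. A minimal sketch of that forward (noising) process, with invented placeholder names and a toy linear noise schedule (not the paper's actual model or schedule):

```python
import numpy as np

# Hypothetical sketch: map discrete token ids into continuous embeddings,
# then noise them with the standard forward diffusion process
# q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
rng = np.random.default_rng(0)
vocab_size, embed_dim, num_steps = 100, 16, 50

# Learnable in a real model; random here for illustration.
embedding = rng.normal(size=(vocab_size, embed_dim))

# Toy linear beta schedule; alpha_bar_t is the cumulative product of (1 - beta).
betas = np.linspace(1e-4, 0.1, num_steps)
alpha_bars = np.cumprod(1.0 - betas)

def embed(token_ids):
    """Map discrete tokens into the continuous space."""
    return embedding[np.asarray(token_ids)]

def q_sample(x0, t):
    """Forward process: noise the clean embeddings x0 at step t."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

tokens = [3, 17, 42, 7]    # a toy "sentence"
x0 = embed(tokens)         # discrete -> continuous
xt = q_sample(x0, t=25)    # partially noised latent, same shape as x0
print(x0.shape, xt.shape)  # (4, 16) (4, 16)
```

At inference, a trained denoiser would reverse this process and the resulting continuous vectors would be rounded back to discrete tokens; the paper targets the gap between how this noising is used in training versus inference.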

Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models

2 code implementations • 8 Mar 2023 • Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan

To this end, we build a system called Visual ChatGPT that incorporates different Visual Foundation Models, enabling the user to interact with ChatGPT by 1) sending and receiving not only language but also images, and 2) issuing complex visual questions or visual editing instructions that require the multi-step collaboration of multiple AI models.
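The architecture described here is a language model acting as a controller that routes requests to visual "tools" and chains them over multiple steps. A toy sketch of that dispatch pattern, with invented tool names and keyword-based routing standing in for the LLM controller (not the authors' implementation):

```python
from typing import Callable, Dict, List

# Invented stand-ins for Visual Foundation Models; each takes and returns
# a string describing the current image state.
TOOLS: Dict[str, Callable[[str], str]] = {
    "image_captioning": lambda img: f"caption({img})",
    "image_editing":    lambda img: f"edited({img})",
    "vqa":              lambda img: f"answer_about({img})",
}

def controller(instruction: str) -> List[str]:
    """Stand-in for the LLM controller: map an instruction to a tool chain."""
    chain = []
    if "?" in instruction:
        chain.append("vqa")
    if "edit" in instruction or "replace" in instruction:
        chain.append("image_editing")
    if not chain:
        chain.append("image_captioning")
    return chain

def run(instruction: str, image: str) -> str:
    state = image
    for tool in controller(instruction):  # multi-step tool collaboration
        state = TOOLS[tool](state)
    return state

print(run("edit the sky in photo.png", "photo.png"))  # edited(photo.png)
```

In the real system the routing decision is made by ChatGPT itself via a prompt manager rather than keyword matching, but the loop structure (decide tool, invoke, feed result back) is the same idea.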

Chinese grammatical error correction based on knowledge distillation

2 code implementations • 31 Jul 2022 • Peng Xia, Yuechi Zhou, Ziyan Zhang, Zecheng Tang, Juntao Li

Given the poor robustness of existing Chinese grammatical error correction models on adversarial test sets and their large parameter counts, this paper applies knowledge distillation to compress the model and improve its resistance to attacks.

Grammatical Error Correction • Knowledge Distillation
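Knowledge distillation, as used above for model compression, trains a small student to match the temperature-softened output distribution of a large teacher. A minimal sketch of that objective with random placeholder logits (the paper's exact loss and models are not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL(teacher || student) on distributions softened by temperature T.

    The T*T factor is the usual gradient rescaling from Hinton-style
    distillation; the teacher's soft targets carry more information than
    one-hot labels, which is what lets a smaller model mimic a larger one.
    """
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T / len(p))

teacher = rng.normal(size=(4, 10))  # toy: 4 positions, 10-way vocabulary
student = rng.normal(size=(4, 10))
loss = distill_loss(student, teacher)
print(loss >= 0.0)  # KL divergence is non-negative -> True
```

In practice this term is combined with the standard cross-entropy on gold corrections, and the smoother teacher distribution is one source of the improved robustness the abstract mentions.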
