1 code implementation • 5 Oct 2024 • Houquan Zhou, Zhenghua Li, Bo Zhang, Chen Li, Shaopeng Lai, Ji Zhang, Fei Huang, Min Zhang
This work proposes a simple, training-free, prompt-free approach that leverages large language models (LLMs) for the Chinese spelling correction (CSC) task and differs fundamentally from all previous CSC approaches.
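The released code is not reproduced here; as a rough illustration of what "training-free and prompt-free" can mean in this setting, the sketch below rescores confusion-set candidates with an off-the-shelf causal LM's token log-likelihoods. The model name, the greedy per-character search, and the toy confusion set are assumptions for illustration, not the paper's actual method.

```python
# Hedged sketch: correct a sentence by letting a causal LM pick, for each character,
# the confusion-set variant that maximizes the sentence's log-likelihood.
# No fine-tuning, no prompt; the model name and confusion set are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-0.5B"  # assumption: any causal LM with Chinese coverage
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_logprob(text: str) -> float:
    """Sum of token log-probabilities under the LM (no prompt, no training)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

def correct(sentence: str, confusion: dict[str, list[str]]) -> str:
    """Greedily replace each character with the confusion-set variant the LM prefers."""
    chars = list(sentence)
    for i, ch in enumerate(chars):
        candidates = [ch] + confusion.get(ch, [])
        scored = [(sentence_logprob("".join(chars[:i] + [c] + chars[i + 1:])), c)
                  for c in candidates]
        chars[i] = max(scored)[1]
    return "".join(chars)

# toy confusion set (illustrative only)
print(correct("我喜欢吃平果", {"平": ["苹"]}))
```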
1 code implementation • 24 Jun 2024 • Hao Yue, Shaopeng Lai, Chengyi Yang, Liang Zhang, Junfeng Yao, Jinsong Su
However, these studies ignore non-bridge entities, each of which co-occurs with only one target entity yet provides semantic associations between target entities for relation prediction.
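To make the bridge / non-bridge distinction concrete, here is a minimal sketch that follows only the definition in the snippet above: an entity is non-bridge if it co-occurs with exactly one target entity, and bridge if it co-occurs with two or more. The helper name and the toy data are hypothetical.

```python
# Illustrative helper: split context entities into bridge vs non-bridge,
# based solely on how many target entities each one co-occurs with.
def split_bridge_entities(cooccurrence: dict[str, set[str]], targets: set[str]):
    """cooccurrence maps each context entity to the target entities it appears with."""
    bridge, non_bridge = {}, {}
    for entity, with_targets in cooccurrence.items():
        linked = with_targets & targets
        (bridge if len(linked) >= 2 else non_bridge)[entity] = linked
    return bridge, non_bridge

targets = {"Company_A", "Company_B"}
cooc = {"CEO_X": {"Company_A", "Company_B"},   # bridge: links both targets
        "Product_Y": {"Company_A"}}            # non-bridge: links only one target
print(split_bridge_entities(cooc, targets))
```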
1 code implementation • Findings (ACL) 2022 • Shaopeng Lai, Qingyu Zhou, Jiali Zeng, Zhongli Li, Chao Li, Yunbo Cao, Jinsong Su
First, they simply mix additionally-constructed training instances with the original ones to train models, which fails to make models explicitly aware of the gradual correction procedure.
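A hedged sketch of the contrast being drawn: rather than mixing constructed instances into one flat training set, one plausible way to expose the gradual-correction procedure is to chain instances so that each turn starts from the previous turn's partially corrected sentence. The edit format and helper below are invented for illustration and are not the paper's construction procedure.

```python
# Invented illustration: build multi-turn training instances where each turn's
# input is the partially corrected output of the previous turn.
def build_multiturn_instances(source: str, edits: list[tuple[str, str]]):
    """edits: ordered (wrong_span, corrected_span) pairs, applied one per turn."""
    instances, current = [], source
    for wrong, corrected in edits:
        next_sent = current.replace(wrong, corrected, 1)
        instances.append({"input": current, "target": next_sent})
        current = next_sent
    return instances

print(build_multiturn_instances(
    "She go to school yesterday and buy a book.",
    [("go", "went"), ("buy", "bought")]))
```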
1 code implementation • EMNLP 2021 • Shaopeng Lai, Ante Wang, Fandong Meng, Jie Zhou, Yubin Ge, Jiali Zeng, Junfeng Yao, Degen Huang, Jinsong Su
Dominant sentence ordering models can be classified into pairwise ordering models and set-to-sequence models.
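As a toy illustration of the first family (not code from the paper): a pairwise ordering model scores local precedence for every sentence pair and then derives a global order, whereas a set-to-sequence model encodes the whole sentence set and decodes the order directly (e.g., with a pointer network). The precedence scorer below is a stand-in heuristic, not a trained model.

```python
# Toy pairwise ordering: score "should a precede b?" for all pairs,
# then rank sentences by how many pairwise comparisons they win.
def pairwise_score(a: str, b: str) -> float:
    """Stand-in for a learned classifier P(a precedes b); here: shorter sentence first."""
    return 1.0 if len(a) <= len(b) else 0.0

def pairwise_order(sentences: list[str]) -> list[str]:
    wins = {s: sum(pairwise_score(s, t) for t in sentences if t is not s)
            for s in sentences}
    return sorted(sentences, key=lambda s: -wins[s])

shuffled = ["He ran after it all afternoon.", "A dog saw a ball.", "It rolled away."]
print(pairwise_order(shuffled))
```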
1 code implementation • Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021 • An-Hui Wang, Linfeng Song, Hui Jiang, Shaopeng Lai, Junfeng Yao, Min Zhang, Jinsong Su
Conversational discourse structures describe how a dialogue is organised and are thus helpful for dialogue understanding and response generation.
Ranked #3 on Discourse Parsing on STAC
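As a concrete picture of what such a structure looks like (a sketch, not the paper's model or the exact STAC annotation scheme): each edge attaches a later utterance to an earlier one with a discourse relation label. The dialogue and relation labels below are invented for illustration.

```python
# Illustrative data structure for a conversational discourse structure:
# a set of labeled dependency edges between utterances.
from dataclasses import dataclass

@dataclass
class DiscourseEdge:
    head: int        # index of the utterance being attached to
    dependent: int   # index of the attached (later) utterance
    relation: str    # discourse relation label (illustrative, not the STAC inventory)

dialogue = [
    "Anyone have wood to trade?",      # 0
    "I do.",                           # 1
    "Great, two sheep for one wood?",  # 2
    "Deal.",                           # 3
]

structure = [
    DiscourseEdge(head=0, dependent=1, relation="Question-answer_pair"),
    DiscourseEdge(head=1, dependent=2, relation="Elaboration"),
    DiscourseEdge(head=2, dependent=3, relation="Acknowledgement"),
]

for edge in structure:
    print(f"{dialogue[edge.head]!r} -> {dialogue[edge.dependent]!r} [{edge.relation}]")
```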