no code implementations • 28 Mar 2024 • Chenming Tang, Fanyi Qu, Yunfang Wu
In this paper, we propose a novel ungrammatical-syntax-based in-context example selection strategy for GEC.
no code implementations • 17 Mar 2024 • Zichen Wu, Hsiu-Yuan Huang, Fanyi Qu, Yunfang Wu
To address them, we propose Mixture-of-Prompt-Experts with Block-Aware Prompt Fusion (MoPE-BAF), a novel multi-modal soft prompt framework based on the unified vision-language model (VLM).
no code implementations • 8 Jul 2023 • Fanyi Qu, Yunfang Wu
Large language models (LLMs) have shown remarkable capability on a variety of Natural Language Processing (NLP) tasks and have attracted much attention recently.
no code implementations • COLING 2022 • Zichen Wu, Xin Jia, Fanyi Qu, Yunfang Wu
Specifically, we present localness modeling with a Gaussian bias to enable the model to focus on answer-surrounding context, and propose a mask attention mechanism to make the syntactic structure of the input passage accessible during the question generation process.
Ranked #5 on Question Generation on SQuAD1.1
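The localness-modeling idea described above can be illustrated with a minimal sketch: a Gaussian bias centered on the answer span is added to raw attention scores, so tokens near the answer receive more attention mass. This is an assumption-laden toy version, not the paper's implementation; the function names, shapes, and the `sigma` parameter are illustrative.

```python
# Hypothetical sketch of Gaussian localness bias on attention scores.
# All names (gaussian_locality_bias, attention_with_locality) and the
# choice of sigma are illustrative, not from the paper.
import numpy as np

def gaussian_locality_bias(seq_len, answer_center, sigma=2.0):
    """Bias favoring positions close to the answer span's center."""
    positions = np.arange(seq_len)
    return -((positions - answer_center) ** 2) / (2 * sigma ** 2)

def attention_with_locality(scores, answer_center, sigma=2.0):
    """Add the Gaussian bias to raw attention scores, then softmax."""
    seq_len = scores.shape[-1]
    biased = scores + gaussian_locality_bias(seq_len, answer_center, sigma)
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform raw scores, attention concentrates around the answer position.
weights = attention_with_locality(np.zeros((1, 10)), answer_center=4)
```

Under this sketch, positions adjacent to the answer center dominate the attention distribution, which is the intended "focus on answer-surrounded context" effect.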
no code implementations • EMNLP 2021 • Fanyi Qu, Xin Jia, Yunfang Wu
This paper is the first to address the question-answer pair generation task on real-world examination data, and proposes a new unified framework on RACE.