no code implementations • COLING 2022 • Xiuyu Wu, Jingsong Yu, Xu Sun, Yunfang Wu
We introduce a novel position offset label prediction subtask into the encoder-decoder architecture for the grammatical error correction (GEC) task.
no code implementations • COLING 2022 • Ming Zhang, Shuai Dou, Ziyang Wang, Yunfang Wu
Automatic medical question summarization can significantly help systems understand consumer health questions and retrieve correct answers.
no code implementations • 2 Sep 2024 • Ke Chang, Hao Li, Junzhao Zhang, Yunfang Wu
Metaphor and sarcasm are common figurative expressions in people's communication, especially on the Internet and in the memes popular among teenagers.
no code implementations • 17 Aug 2024 • Hsiu-Yuan Huang, Zichen Wu, Yutong Yang, Junzhao Zhang, Yunfang Wu
Nowadays, Large Language Models (LLMs) have demonstrated exceptional performance across various downstream tasks.
no code implementations • 16 Aug 2024 • Chenming Tang, Zhixiang Wang, Yunfang Wu
With the help of in-context learning (ICL), large language models (LLMs) have achieved impressive performance across various tasks.
no code implementations • 9 Aug 2024 • Chenming Tang, Zhixiang Wang, Yunfang Wu
In-context learning (ICL) greatly improves the performance of large language models (LLMs) on various downstream tasks, where the improvement depends heavily on the quality of demonstrations.
no code implementations • 4 Jun 2024 • Yida Cai, Hao Sun, Hsiu-Yuan Huang, Yunfang Wu
Through meticulous experimentation and analysis, we aim to provide insights into the strengths, limitations, and potential enhancements of existing Chinese open-source LLMs for information extraction in NLP.
no code implementations • 3 Jun 2024 • Fanyi Qu, Hao Sun, Yunfang Wu
Our proposed unsupervised DG method offers a cost-effective framework for practical reading comprehension applications, without the need for laborious distractor annotation or costly large models.
no code implementations • 7 Apr 2024 • Sanwoo Lee, Yida Cai, Desong Meng, Ziyang Wang, Yunfang Wu
Then, an LLM is prompted to extract trait scores from several conversational rounds, each round scoring one of the traits based on the scoring criteria.
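A hypothetical sketch of such a trait-by-trait scoring loop is given below; the `llm` callable, the chat-message format, and the 1-to-5 score range are illustrative assumptions, not details taken from the paper.

```python
def score_traits(llm, essay, criteria):
    """Prompt an LLM over several conversational rounds, one trait per round,
    and collect the scores. `llm` is assumed to accept a list of chat messages
    and return a text reply."""
    messages = [{"role": "user", "content": f"Here is an essay to assess:\n{essay}"}]
    scores = {}
    for trait, rubric in criteria.items():
        messages.append({
            "role": "user",
            "content": f"Score the trait '{trait}' from 1 to 5 using this rubric:\n{rubric}\n"
                       "Reply with a single number."
        })
        reply = llm(messages)
        messages.append({"role": "assistant", "content": reply})
        scores[trait] = float(reply.strip())
    return scores
```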
1 code implementation • 3 Apr 2024 • Ziyang Wang, Sanwoo Lee, Hsiu-Yuan Huang, Yunfang Wu
Our proposed method establishes a new architecture for prompt tuning that sheds light on how linguistic features can be easily adapted to linguistically related tasks.
no code implementations • 28 Mar 2024 • Chenming Tang, Fanyi Qu, Yunfang Wu
In this paper, we propose a novel ungrammatical-syntax-based in-context example selection strategy for GEC.
no code implementations • 28 Mar 2024 • Chenming Tang, Zhixiang Wang, Yunfang Wu
In-context learning (ICL) is the trending prompting strategy in the era of large language models (LLMs), where a few examples are demonstrated to elicit the LLM's abilities on a given task.
no code implementations • 17 Mar 2024 • Zichen Wu, Hsiu-Yuan Huang, Fanyi Qu, Yunfang Wu
To address them, we propose Mixture-of-Prompt-Experts with Block-Aware Prompt Fusion (MoPE-BAF), a novel multi-modal soft prompt framework based on the unified vision-language model (VLM).
no code implementations • 11 Mar 2024 • Ming Zhang, Ke Chang, Yunfang Wu
Multi-modal semantic understanding requires integrating information from different modalities to extract users' real intentions behind their words.
no code implementations • 8 Jul 2023 • Fanyi Qu, Yunfang Wu
Large language models (LLMs) have shown remarkable capability in various Natural Language Processing (NLP) tasks and have attracted much attention recently.
1 code implementation • 24 May 2023 • Chenming Tang, Xiuyu Wu, Yunfang Wu
To this end, we explore several ensemble strategies based on strong PLMs with four sophisticated single models.
1 code implementation • 16 Jan 2023 • Rui Sun, Xiuyu Wu, Yunfang Wu
Borrowing the powerful ability of BERT, we propose a novel zero-shot error detection method to perform a preliminary detection, which guides our model to attend more to the probably erroneous tokens during encoding and to avoid modifying correct tokens during generation.
no code implementations • 3 Nov 2022 • Xiuyu Wu, Yunfang Wu
To handle grammatical error correction, we design part-of-speech (POS) features and semantic class features to enhance the neural network model, and propose an auxiliary task to predict the POS sequence of the target sentence.
1 code implementation • 19 Oct 2022 • Wenbiao Li, Ziyang Wang, Yunfang Wu
For readability assessment, traditional methods mainly employ machine learning classifiers with hundreds of linguistic features.
no code implementations • COLING 2022 • Zichen Wu, Xin Jia, Fanyi Qu, Yunfang Wu
Specifically, we present localness modeling with a Gaussian bias to enable the model to focus on the context surrounding the answer, and propose a mask attention mechanism to make the syntactic structure of the input passage accessible during question generation (a rough sketch of the Gaussian locality bias appears below).
Ranked #5 on Question Generation on SQuAD1.1
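As referenced above, here is a minimal sketch of how a Gaussian locality bias might be added to attention logits, assuming the answer-span center position is known; the single-center formulation and the function names are illustrative, not taken from the paper.

```python
import torch

def gaussian_localness_bias(seq_len, center, sigma):
    """Additive bias that peaks at `center` (e.g. the answer position) and
    decays with squared distance, so attention favors nearby tokens."""
    positions = torch.arange(seq_len, dtype=torch.float)
    return -((positions - center) ** 2) / (2 * sigma ** 2)

def localized_attention(q, k, v, center, sigma=5.0):
    """Scaled dot-product attention with the Gaussian locality bias added to the logits."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5              # (..., L_q, L_k)
    bias = gaussian_localness_bias(k.size(-2), center, sigma).to(scores.device)
    scores = scores + bias                                    # broadcast over query positions
    return torch.softmax(scores, dim=-1) @ v
```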
1 code implementation • 1 Sep 2022 • Ming Zhang, Shuai Dou, Ziyang Wang, Yunfang Wu
Automatic medical question summarization can significantly help systems understand consumer health questions and retrieve correct answers.
1 code implementation • 13 Jul 2022 • Wenbiao Li, Rui Sun, Yunfang Wu
To strengthen word boundary information, we mix the representations of the internal characters within a word.
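A minimal sketch of one way such character-word mixing could look, assuming a simple weighted average of the word embedding and the mean of its internal character embeddings; the mixing weight and function name are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def mix_word_and_chars(word_emb, char_embs, alpha=0.5):
    """Blend a word embedding with the mean of its internal characters' embeddings.
    word_emb: (dim,) tensor; char_embs: (n_chars, dim) tensor."""
    char_mean = char_embs.mean(dim=0)
    return alpha * word_emb + (1 - alpha) * char_mean
```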
1 code implementation • 13 Oct 2021 • Guangxiang Zhao, Wenkai Yang, Xuancheng Ren, Lei Li, Yunfang Wu, Xu Sun
The conventional wisdom in learning deep classification models is to focus on badly-classified examples and ignore well-classified examples that are far from the decision boundary.
no code implementations • EMNLP 2021 • Fanyi Qu, Xin Jia, Yunfang Wu
This paper addresses, for the first time, the question-answer pair generation task on real-world examination data, and proposes a new unified framework on RACE.
no code implementations • 20 Aug 2021 • Zhiyuan Zhang, Wei Li, Ruihan Bao, Keiko Harimoto, Yunfang Wu, Xu Sun
Besides mitigating the security concerns posed by potential adversarial examples, adversarial training can also improve the generalization ability of neural networks, yield more robust models, and provide interpretability for neural networks.
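For readers unfamiliar with the technique, below is a generic FGSM-style adversarial training step in PyTorch; this is a standard textbook formulation shown for illustration, not the specific scheme used in the paper.

```python
import torch

def fgsm_adversarial_loss(model, loss_fn, x, y, epsilon=0.01):
    """Combined clean + adversarial loss using the fast gradient sign method.
    Assumes `x` is a continuous (float) input such as embeddings or features."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_adv = loss_fn(model(x_adv), y)
    loss_adv.backward()
    # Perturb the input in the direction that increases the loss.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
    # Train on both the clean and the perturbed inputs.
    return loss_fn(model(x), y) + loss_fn(model(x_adv), y)
```

The caller would backpropagate through the returned loss and take an optimizer step as usual.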
no code implementations • CCL 2021 • Xin Jia, Hao Wang, Dawei Yin, Yunfang Wu
Question generation (QG) is to generate natural and grammatical questions that can be answered by a specific answer for a given context.
no code implementations • 28 May 2021 • Yi Zhang, Lei Li, Yunfang Wu, Qi Su, Xu Sun
Knowledge facts are typically represented by relational triples, yet we observe that some commonsense facts are represented by triples whose forms are inconsistent with how they are expressed in natural language.
1 code implementation • 11 Dec 2020 • Xin Jia, Wenjie Zhou, Xu Sun, Yunfang Wu
Question Generation (QG) is an essential component of automatic intelligent tutoring systems, aiming to generate high-quality questions that facilitate reading practice and assessment.
no code implementations • AACL 2020 • Weikang Li, Yunfang Wu
Answer selection (AS) is an important subtask of document-based question answering (DQA).
no code implementations • 28 Sep 2020 • Zhihan Zhang, Xiubo Geng, Tao Qin, Yunfang Wu, Daxin Jiang
In this work, we focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.
no code implementations • ACL 2020 • Xin Jia, Wenjie Zhou, Xu Sun, Yunfang Wu
Given a sentence and a relevant answer, asking good questions is a challenging task with many real-world applications.
no code implementations • WS 2020 • Xiuyu Wu, Nan Jiang, Yunfang Wu
Answer-agnostic question generation is a significant and challenging task, which aims to automatically generate questions for a given sentence without a specified answer.
1 code implementation • 14 Apr 2020 • Siyu Duan, Wei Li, Cai Jing, Yancheng He, Yunfang Wu, Xu Sun
In this paper, we propose the query-variant advertisement text generation task, which aims to generate candidate advertisement texts for different web search queries with various needs, based on the queries and item keywords.
2 code implementations • 14 Apr 2020 • Shu Liu, Wei Li, Yunfang Wu, Qi Su, Xu Sun
Target-Based Sentiment Analysis aims to detect opinion aspects (aspect extraction) and the sentiment polarities towards them (sentiment detection).
no code implementations • 20 Nov 2019 • Xiaorui Zhou, Senlin Luo, Yunfang Wu
Second, they did not emphasize the relationship between the distractor and the article, making the generated distractors semantically irrelevant to the article and thus failing to form a set of meaningful options.
no code implementations • COLING 2016 • Wei Li, Yunfang Wu
In this paper we focus on the problem of dialog act (DA) labelling.
no code implementations • IJCNLP 2019 • Wenjie Zhou, Minghua Zhang, Yunfang Wu
Question generation is a challenging task which aims to ask a question based on an answer and relevant context.
no code implementations • IJCNLP 2019 • Wenjie Zhou, Minghua Zhang, Yunfang Wu
This paper explores the task of answer-aware question generation.
1 code implementation • ACL 2019 • Wei Li, Jingjing Xu, Yancheng He, ShengLi Yan, Yunfang Wu, Xu Sun
In this paper, we propose to generate comments with a graph-to-sequence model that models the input news as a topic interaction graph.
1 code implementation • 4 Jun 2019 • Wei Li, Jingjing Xu, Yancheng He, ShengLi Yan, Yunfang Wu, Xu Sun
In this paper, we propose to generate comments with a graph-to-sequence model that models the input news as a topic interaction graph.
no code implementations • 16 May 2019 • Xiuyu Wu, Yunfang Wu
How to generate human-like responses is one of the most challenging tasks for artificial intelligence.
1 code implementation • EMNLP 2018 • Minghua Zhang, Yunfang Wu, Weikang Li, Wei Li
In the encoding phase, we propose a mean-max strategy that applies both mean and max pooling operations over the hidden vectors to capture diverse information about the input.
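A minimal sketch of such a mean-max pooling layer over masked hidden states is shown below, assuming a standard (batch, seq_len, dim) layout; the shapes and names are illustrative, not the paper's exact implementation.

```python
import torch

def mean_max_pool(hidden, mask):
    """Concatenate mean pooling and max pooling over a sequence of hidden states.
    hidden: (batch, seq_len, dim); mask: (batch, seq_len) with 1 for real tokens."""
    mask = mask.unsqueeze(-1).float()
    mean = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    maxed = hidden.masked_fill(mask == 0, float("-inf")).max(dim=1).values
    return torch.cat([mean, maxed], dim=-1)   # (batch, 2 * dim)
```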
no code implementations • 16 Aug 2018 • Wei Li, Xuancheng Ren, Damai Dai, Yunfang Wu, Houfeng Wang, Xu Sun
In the experiments, we take a real-world sememe knowledge base HowNet and the corresponding descriptions of the words in Baidu Wiki for training and evaluation.
no code implementations • 9 Mar 2018 • Minghua Zhang, Yunfang Wu
In this paper, we propose a novel unsupervised framework, namely reduced attentive matching network (RAMN), to compute semantic matching between two questions.
no code implementations • 27 Jan 2018 • Wei Li, Yunfang Wu, Xueqiang Lv
Using a low-dimensional vector space to represent words has been very effective in many NLP tasks.
no code implementations • 17 Sep 2017 • Wei Li, Yunfang Wu
In this paper, we focus on the problem of answer triggering addressed by Yang et al. (2015), which is a critical component of a real-world question answering system.