Search Results for author: Yuanmeng Yan

Found 16 papers, 10 papers with code

Dynamically Disentangling Social Bias from Task-Oriented Representations with Adversarial Attack

1 code implementation · NAACL 2021 · LiWen Wang, Yuanmeng Yan, Keqing He, Yanan Wu, Weiran Xu

In this paper, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representations trained on the main task.

Adversarial Attack · Representation Learning
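A common building block for this kind of adversarial decoupling is a gradient-reversal update: the encoder descends the main-task gradient while ascending the bias classifier's gradient, so bias information is pushed out of the shared representation. The toy sketch below illustrates that idea on a linear setup; `encoder_grad`, the linear heads, and the squared-error losses are illustrative assumptions, not the authors' implementation, whose attack-based objective may differ.

```python
# Toy sketch of one adversarial-disentanglement update on a shared
# representation h, using the common gradient-reversal formulation.
# NOTE: illustrative assumption, not the paper's exact objective.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def encoder_grad(g_task, g_bias, lam=0.1):
    """Gradient sent to the encoder: follow the task, oppose the bias head."""
    return [gt - lam * gb for gt, gb in zip(g_task, g_bias)]

# Shared representation and two linear heads (main task and bias attribute).
h = [0.5, -1.0, 2.0]
w_task, y_task = [1.0, 0.0, 0.5], 1.2
w_bias, y_bias = [0.2, 0.3, -0.1], 0.0

# Gradients of the two squared-error losses with respect to h.
g_task = [2 * (dot(h, w_task) - y_task) * w for w in w_task]
g_bias = [2 * (dot(h, w_bias) - y_bias) * w for w in w_bias]

# The encoder minimizes the task loss while maximizing the bias loss.
g_h = encoder_grad(g_task, g_bias)
```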

InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER

1 code implementation · 8 Mar 2022 · LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu

Recently, prompt-based methods have achieved strong performance in few-shot learning scenarios by bridging the gap between language-model pre-training and fine-tuning for downstream tasks.

Entity Typing · Few-Shot Learning +5
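The instruction-based formulation casts NER as conditional generation: the model is fed the sentence together with a natural-language instruction and the candidate entity types, then decodes the entities as text. A minimal sketch of such a prompt builder follows; the template wording here is hypothetical, and InstructionNER's actual templates may differ.

```python
def build_instruction_prompt(sentence, entity_types):
    # Hypothetical template in the spirit of instruction-based generative
    # NER; the exact templates used by InstructionNER may differ.
    options = ", ".join(entity_types)
    return (
        f"Sentence: {sentence}\n"
        "Instruction: please extract entities and their types from the "
        "sentence above; all candidate entity types are listed in Options.\n"
        f"Options: {options}\n"
        "Answer:"
    )

prompt = build_instruction_prompt("John lives in Paris", ["person", "location"])
```

A seq2seq model would then be fine-tuned to generate answers in this style, e.g. naming each entity with its type.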

Rethink about the Word-level Quality Estimation for Machine Translation from Human Judgement

1 code implementation · 13 Sep 2022 · Zhen Yang, Fandong Meng, Yuanmeng Yan, Jie Zhou

While the post-editing effort can be used to measure the translation quality to some extent, we find it usually conflicts with the human judgement on whether the word is well or poorly translated.

Machine Translation · Sentence +2

Disentangling Confidence Score Distribution for Out-of-Domain Intent Detection with Energy-Based Learning

no code implementations · 17 Oct 2022 · Yanan Wu, Zhiyuan Zeng, Keqing He, Yutao Mou, Pei Wang, Yuanmeng Yan, Weiran Xu

In this paper, we propose a simple but strong energy-based score function to detect OOD, where the energy scores of OOD samples are higher than those of IND samples.

Intent Detection · Out of Distribution (OOD) Detection
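Energy scores of this kind are conventionally computed from the classifier logits as E(x) = -T · log Σᵢ exp(fᵢ(x)/T), so flat, low-confidence logit vectors receive higher energy than peaked in-domain ones. A minimal sketch under that standard definition (the paper's exact training objective is not reproduced here):

```python
import math

def energy_score(logits, temperature=1.0):
    # E(x) = -T * log(sum_i exp(f_i(x) / T)); lower energy => more in-domain.
    z = [l / temperature for l in logits]
    m = max(z)  # subtract the max for numerical stability
    return -temperature * (m + math.log(sum(math.exp(v - m) for v in z)))

ind_energy = energy_score([10.0, 0.5, 0.2])  # confident, peaked logits
ood_energy = energy_score([0.4, 0.3, 0.5])   # flat, uncertain logits
# OOD inputs yield higher energy than IND inputs, enabling a threshold test.
```

In practice a threshold on the energy score separates the two distributions: inputs whose energy exceeds the threshold are flagged as out-of-domain.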

Adversarial Semantic Decoupling for Recognizing Open-Vocabulary Slots

no code implementations · EMNLP 2020 · Yuanmeng Yan, Keqing He, Hong Xu, Sihong Liu, Fanyu Meng, Min Hu, Weiran Xu

Open-vocabulary slots, such as file name, album name, or schedule title, significantly degrade the performance of neural slot-filling models, since these slots can take values from a virtually unlimited set and have no semantic restriction or length limit.

Sentence · slot-filling +1

A Finer-grain Universal Dialogue Semantic Structures based Model For Abstractive Dialogue Summarization

no code implementations · Findings (EMNLP) 2021 · Yuejie Lei, Fujia Zheng, Yuanmeng Yan, Keqing He, Weiran Xu

Although abstractive summarization models have achieved impressive results on document summarization tasks, their performance on dialogue is much less satisfactory, due to the crude and overly simple methods used for dialogue encoding.

Abstractive Dialogue Summarization · Abstractive Text Summarization +1

Large-Scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models

1 code implementation · EMNLP 2021 · Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Zan Daoguang, Fuzheng Zhang, Wei Wu, Weiran Xu

The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the reasoning paths in the knowledge base (KB).

Question Answering · Relation +2
