no code implementations • EMNLP 2020 • Yuanmeng Yan, Keqing He, Hong Xu, Sihong Liu, Fanyu Meng, Min Hu, Weiran Xu
Open-vocabulary slots, such as file name, album name, or schedule title, significantly degrade the performance of neural-based slot filling models, since these slots can take values from a virtually unlimited set and have no semantic restriction or length limit.
no code implementations • Findings (EMNLP) 2021 • Lulu Zhao, Weihao Zeng, Weiran Xu, Jun Guo
Abstractive dialogue summarization suffers from many factual errors, which stem from the scattered salient elements in the multi-speaker information interaction process.
1 code implementation • Findings (EMNLP) 2021 • Yuejie Lei, Fujia Zheng, Yuanmeng Yan, Keqing He, Weiran Xu
Although abstractive summarization models have achieved impressive results on document summarization tasks, their performance on dialogue summarization is much less satisfactory due to the crude and straightforward methods used for dialogue encoding.
Abstractive Dialogue Summarization
Abstractive Text Summarization
1 code implementation • ACL 2022 • Yutao Mou, Keqing He, Yanan Wu, Zhiyuan Zeng, Hong Xu, Huixing Jiang, Wei Wu, Weiran Xu
Discovering Out-of-Domain (OOD) intents is essential for developing new skills in a task-oriented dialogue system.
no code implementations • EMNLP 2021 • Zhiyuan Zeng, Jiaze Chen, Weiran Xu, Lei LI
Based on the artificial dataset, we train an evaluation model that not only makes accurate and robust factual consistency judgments but is also capable of interpretable factual error tracing via the backpropagated gradient distribution on token embeddings.
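As a rough illustration of this kind of gradient-based error tracing (not the authors' implementation), the sketch below scores each input token by the norm of the gradient of a consistency logit with respect to the token embeddings; `model` and `embedder` are hypothetical stand-ins for any PyTorch consistency classifier and its embedding layer.

```python
# Minimal sketch: trace token-level contributions to a factual-consistency
# score by backpropagating through the token embeddings (illustrative only).
import torch

def trace_factual_errors(model, embedder, token_ids):
    """Return a per-token saliency score from gradients on the embeddings.

    `model` is assumed to map embeddings -> a consistency logit;
    `embedder` is the model's token-embedding layer. Both are hypothetical.
    """
    embeds = embedder(token_ids)          # (batch, seq_len, hidden)
    embeds.retain_grad()                  # keep gradients on this non-leaf tensor
    logit = model(embeds)                 # (batch, 1) consistency score
    logit.sum().backward()
    # L2 norm of the gradient per token as an importance / error-tracing signal.
    saliency = embeds.grad.norm(dim=-1)   # (batch, seq_len)
    return saliency
```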
1 code implementation • EMNLP 2021 • Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Zan Daoguang, Fuzheng Zhang, Wei Wu, Weiran Xu
The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the reasoning paths in the knowledge base (KB).
no code implementations • 26 Apr 2022 • Xuefeng Li, Hao Lei, LiWen Wang, Guanting Dong, Jinzheng Zhao, Jiachi Liu, Weiran Xu, Chunyun Zhang
In this paper, we propose a robust contrastive alignment method to align text classification features of various domains in the same feature space by supervised contrastive learning.
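For context, here is a minimal sketch of a supervised contrastive objective of the kind referenced above (following the general Khosla et al. formulation, not necessarily the paper's exact loss), which pulls same-label features together regardless of their source domain:

```python
# Hedged sketch of a supervised contrastive loss over a mixed-domain batch.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, d) encodings; labels: (N,) class ids shared across domains."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                          # (N, N) similarities
    mask_self = torch.eye(len(labels), device=z.device)
    mask_pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float() - mask_self
    logits = sim - 1e9 * mask_self                         # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    denom = mask_pos.sum(dim=1).clamp(min=1.0)
    loss = -(mask_pos * log_prob).sum(dim=1) / denom       # average over positives
    return loss.mean()
```

In practice such a loss would be combined with the standard classification objective over batches that mix examples from the different domains being aligned.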
1 code implementation • 9 Apr 2022 • Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, Yanan Wu
The most advanced abstractive dialogue summarizers lack generalization ability on new domains, and existing research on domain adaptation for summarization generally relies on large-scale pre-training.
no code implementations • 8 Mar 2022 • LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu
Recently, prompt-based methods have achieved strong performance in few-shot learning scenarios by bridging the gap between language-model pre-training and fine-tuning for downstream tasks.
no code implementations • 25 Oct 2021 • Lulu Zhao, Fujia Zheng, Keqing He, Weihao Zeng, Yuejie Lei, Huixing Jiang, Wei Wu, Weiran Xu, Jun Guo, Fanyu Meng
Previous dialogue summarization datasets mainly focus on open-domain chitchat dialogues, while summarization datasets for the widely used task-oriented dialogue have not yet been explored.
1 code implementation • EMNLP 2021 • LiWen Wang, Xuefeng Li, Jiachi Liu, Keqing He, Yuanmeng Yan, Weiran Xu
Zero-shot cross-domain slot filling alleviates data dependence when the target domain is data-scarce, and has therefore attracted extensive research.
1 code implementation • NAACL 2021 • Zhiyuan Zeng, Keqing He, Yuanmeng Yan, Hong Xu, Weiran Xu
Detecting out-of-domain (OOD) intents is crucial for the deployed task-oriented dialogue system.
1 code implementation • NAACL 2021 • LiWen Wang, Yuanmeng Yan, Keqing He, Yanan Wu, Weiran Xu
In this paper, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representations trained on the main task.
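A common way to realize this kind of adversarial decoupling is a gradient-reversal adversary that tries to predict the bias attribute from the shared representation; the sketch below is illustrative and does not reproduce the paper's architecture.

```python
# Illustrative adversarial debiasing via gradient reversal (not the paper's code).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

class DebiasedClassifier(nn.Module):
    def __init__(self, encoder, hidden, num_classes, num_bias_attrs, lambd=1.0):
        super().__init__()
        self.encoder = encoder                         # any text encoder (hypothetical)
        self.task_head = nn.Linear(hidden, num_classes)
        self.bias_head = nn.Linear(hidden, num_bias_attrs)
        self.lambd = lambd

    def forward(self, inputs):
        h = self.encoder(inputs)                       # (batch, hidden)
        task_logits = self.task_head(h)
        # The adversary predicts the bias attribute; the reversed gradient
        # pushes the encoder to strip that information from h.
        bias_logits = self.bias_head(GradReverse.apply(h, self.lambd))
        return task_logits, bias_logits
```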
1 code implementation • ACL 2021 • Zhiyuan Zeng, Keqing He, Yuanmeng Yan, Zijun Liu, Yanan Wu, Hong Xu, Huixing Jiang, Weiran Xu
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system.
1 code implementation • ACL 2021 • Yanan Wu, Zhiyuan Zeng, Keqing He, Hong Xu, Yuanmeng Yan, Huixing Jiang, Weiran Xu
Existing slot filling models can only recognize pre-defined in-domain slot types from a limited slot set.
1 code implementation • ACL 2021 • Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, Weiran Xu
Learning high-quality sentence representations benefits a wide range of natural language processing tasks.
no code implementations • 1 Jan 2021 • Lulu Zhao, Zeyuan Yang, Weiran Xu, Sheng Gao, Jun Guo
In this paper, we present a Knowledge Graph Enhanced Dual-Copy network (KGEDC), a novel framework for abstractive dialogue summarization with conversational structure and factual knowledge.
no code implementations • COLING 2020 • Hong Xu, Keqing He, Yuanmeng Yan, Sihong Liu, Zijun Liu, Weiran Xu
Detecting out-of-domain (OOD) input intents is critical in the task-oriented dialog system.
no code implementations • COLING 2020 • Keqing He, Jinchao Zhang, Yuanmeng Yan, Weiran Xu, Cheng Niu, Jie Zhou
In this paper, we propose a Contrastive Zero-Shot Learning with Adversarial Attack (CZSL-Adv) method for cross-domain slot filling.
no code implementations • COLING 2020 • Lulu Zhao, Weiran Xu, Jun Guo
A masked graph self-attention mechanism is used to integrate cross-sentence information flows and focus more on related utterances, which leads to a better understanding of the dialogue.
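A minimal sketch of masked self-attention over utterance nodes follows, assuming a 0/1 adjacency matrix (with self-loops) built from the dialogue structure; the graph construction itself is an assumption here, not taken from the paper.

```python
# Illustrative masked graph self-attention: positions not connected in the
# dialogue graph are masked out before the softmax.
import torch
import torch.nn.functional as F

def masked_graph_self_attention(nodes, adjacency):
    """nodes: (n, d) utterance representations;
    adjacency: (n, n) 0/1 matrix, assumed to contain self-loops."""
    d = nodes.size(-1)
    scores = nodes @ nodes.t() / d ** 0.5                 # (n, n) attention scores
    scores = scores.masked_fill(adjacency == 0, float('-inf'))
    weights = F.softmax(scores, dim=-1)                   # attend only along edges
    return weights @ nodes                                # aggregated node features
```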
no code implementations • ACL 2020 • Keqing He, Yuanmeng Yan, Weiran Xu
Neural-based context-aware models for slot tagging have achieved state-of-the-art performance.
1 code implementation • 8 Jan 2020 • Pengda Qin, Xin Wang, Wenhu Chen, Chunyun Zhang, Weiran Xu, William Yang Wang
Large-scale knowledge graphs (KGs) are becoming increasingly important in current information systems.
no code implementations • 3 Sep 2019 • Yuanyuan Qi, Jiayue Zhang, Weiran Xu, Jun Guo
In this paper, we propose a salient-context based semantic matching method to improve relevance ranking in information retrieval.
no code implementations • NAACL 2018 • Chenliang Li, Weiran Xu, Si Li, Sheng Gao
Then, we introduce a Key Information Guide Network (KIGN), which encodes the keywords into a key information representation to guide the generation process.
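The guiding idea can be pictured as folding a pooled keyword encoding into the decoder state before attending over the source; the sketch below is a loose illustration with made-up module names, not the KIGN implementation.

```python
# Rough sketch: condition the attention over the source on a pooled keyword
# representation, so generation is guided by the key information.
import torch
import torch.nn as nn

class KeyInfoGuidedAttention(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, decoder_state, key_info, encoder_outputs):
        """decoder_state: (b, h); key_info: (b, h) pooled keyword encoding;
        encoder_outputs: (b, src_len, h)."""
        guided = self.proj(torch.cat([decoder_state, key_info], dim=-1))   # (b, h)
        scores = torch.bmm(encoder_outputs, guided.unsqueeze(-1)).squeeze(-1)
        attn = torch.softmax(scores, dim=-1)                               # (b, src_len)
        context = torch.bmm(attn.unsqueeze(1), encoder_outputs).squeeze(1) # (b, h)
        return context, attn
```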
Ranked #10 on Text Summarization on CNN / Daily Mail (Anonymized)
no code implementations • ACL 2018 • Pengda Qin, Weiran Xu, William Yang Wang
Distant supervision can effectively label data for relation extraction, but suffers from the noisy labeling problem.
2 code implementations • ACL 2018 • Pengda Qin, Weiran Xu, William Yang Wang
The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems.
no code implementations • WS 2017 • Zuyi Bao, Si Li, Weiran Xu, Sheng Gao
For Chinese word segmentation, the large-scale annotated corpora mainly focus on newswire, and only a small amount of annotated data is available in other domains such as patents and literature.
no code implementations • WS 2017 • Dongyun Liang, Weiran Xu, Yinge Zhao
Word representation models have achieved great success in natural language processing tasks, such as relation classification.
no code implementations • COLING 2016 • Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, Weiran Xu
Instead of directly relying on word alignment results, this framework combines the advantages of rule-based methods and deep learning methods in two steps: first, it generates a high-confidence entity annotation set on the IL side with strict search methods; second, it uses this high-confidence set to weakly supervise the model training.
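A hedged sketch of that two-step recipe, with a placeholder entity lexicon and BIO tagging standing in for the paper's strict searching step:

```python
# Illustrative weak supervision: (1) build a high-confidence silver annotation
# set by strict exact matching against an entity lexicon, (2) keep only
# confidently matched sentences for weakly supervised training.
def silver_annotate(sentences, entity_lexicon):
    """sentences: list of token lists; entity_lexicon: {tuple(tokens): entity_type}."""
    silver = []
    for tokens in sentences:
        tags = ['O'] * len(tokens)
        matched = False
        for span, etype in entity_lexicon.items():
            n = len(span)
            for i in range(len(tokens) - n + 1):
                if tuple(tokens[i:i + n]) == span:            # strict exact match
                    tags[i] = f'B-{etype}'
                    tags[i + 1:i + n] = [f'I-{etype}'] * (n - 1)
                    matched = True
        if matched:                                           # keep confident sentences only
            silver.append((tokens, tags))
    return silver                                             # used as weak supervision
```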