Search Results for author: Weiran Xu

Found 63 papers, 32 papers with code

Bitext Name Tagging for Cross-lingual Entity Annotation Projection

no code implementations COLING 2016 Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, Weiran Xu

Instead of relying directly on word-alignment results, this framework combines the advantages of rule-based and deep learning methods in two steps: first, it generates a high-confidence entity annotation set on the IL side with strict search methods; second, it uses this high-confidence set to weakly supervise model training.

named-entity-recognition Named Entity Recognition +2

Neural Regularized Domain Adaptation for Chinese Word Segmentation

no code implementations WS 2017 Zuyi Bao, Si Li, Weiran Xu, Sheng Gao

For Chinese word segmentation, large-scale annotated corpora mainly cover newswire, and only a small amount of annotated data is available in other domains such as patents and literature.

Chinese Word Segmentation Domain Adaptation +3

Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning

2 code implementations ACL 2018 Pengda Qin, Weiran Xu, William Yang Wang

The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared with state-of-the-art systems.

reinforcement-learning Reinforcement Learning (RL) +3

Guiding Generation for Abstractive Text Summarization Based on Key Information Guide Network

no code implementations NAACL 2018 Chenliang Li, Weiran Xu, Si Li, Sheng Gao

Then, we introduce a Key Information Guide Network (KIGN), which encodes the keywords into a key information representation to guide the generation process.

Abstractive Text Summarization

Finding Salient Context based on Semantic Matching for Relevance Ranking

no code implementations 3 Sep 2019 Yuanyuan Qi, Jiayue Zhang, Weiran Xu, Jun Guo

In this paper, we propose a salient-context based semantic matching method to improve relevance ranking in information retrieval.

Information Retrieval Retrieval +2

Improving Abstractive Dialogue Summarization with Graph Structures and Topic Words

no code implementations COLING 2020 Lulu Zhao, Weiran Xu, Jun Guo

A masked graph self-attention mechanism is used to integrate cross-sentence information flows and focus on the related utterances, which helps the model better understand the dialogue.

Abstractive Dialogue Summarization Graph Attention +1
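A masked self-attention step of the kind this abstract describes can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's implementation: the utterance vectors, dimensions, and mask below are made-up toy values.

```python
import numpy as np

def masked_self_attention(X, mask):
    """Single-head self-attention over utterance vectors X (n x d).
    mask[i, j] = True where utterance i may attend to utterance j;
    blocked positions are pushed to -1e9 before the softmax."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)          # (n, n) similarity scores
    scores = np.where(mask, scores, -1e9)  # block non-related utterances
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X                     # context-aware utterance vectors

# Toy example: 3 utterances; utterance 2 attends only to utterances 1 and 2.
X = np.random.default_rng(0).normal(size=(3, 4))
mask = np.array([[1, 1, 1], [1, 1, 1], [0, 1, 1]], dtype=bool)
out = masked_self_attention(X, mask)
```

Because utterance 0 is masked out for utterance 2, changing utterance 0's vector leaves row 2 of the output untouched.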

Improving Abstractive Dialogue Summarization with Conversational Structure and Factual Knowledge

no code implementations 1 Jan 2021 Lulu Zhao, Zeyuan Yang, Weiran Xu, Sheng Gao, Jun Guo

In this paper, we present a Knowledge Graph Enhanced Dual-Copy network (KGEDC), a novel framework for abstractive dialogue summarization with conversational structure and factual knowledge.

Abstractive Dialogue Summarization Sentence

Dynamically Disentangling Social Bias from Task-Oriented Representations with Adversarial Attack

1 code implementation NAACL 2021 LiWen Wang, Yuanmeng Yan, Keqing He, Yanan Wu, Weiran Xu

In this paper, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representations trained on the main task.

Adversarial Attack Representation Learning

TODSum: Task-Oriented Dialogue Summarization with State Tracking

no code implementations 25 Oct 2021 Lulu Zhao, Fujia Zheng, Keqing He, Weihao Zeng, Yuejie Lei, Huixing Jiang, Wei Wu, Weiran Xu, Jun Guo, Fanyu Meng

Previous dialogue summarization datasets mainly focus on open-domain chitchat, while summarization datasets for the widely used task-oriented dialogues have not yet been explored.

InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER

1 code implementation 8 Mar 2022 LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu

Recently, prompt-based methods have achieved significant performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks.

Entity Typing Few-Shot Learning +5

Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization

1 code implementation NAACL 2022 Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, Yanan Wu

The most advanced abstractive dialogue summarizers lack generalization ability on new domains, and existing research on domain adaptation in summarization generally relies on large-scale pre-training.

Domain Adaptation

A Robust Contrastive Alignment Method For Multi-Domain Text Classification

no code implementations 26 Apr 2022 Xuefeng Li, Hao Lei, LiWen Wang, Guanting Dong, Jinzheng Zhao, Jiachi Liu, Weiran Xu, Chunyun Zhang

In this paper, we propose a robust contrastive alignment method to align text classification features of various domains in the same feature space by supervised contrastive learning.

Contrastive Learning text-classification +1
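Supervised contrastive alignment of this kind can be illustrated with a minimal loss computation. This is a generic SupCon-style loss rather than the paper's exact objective, and the batch, feature dimension, and temperature below are made-up values.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss: pull same-label features
    together in the shared space, push different-label features apart."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
    sim = f @ f.T / temperature
    n = len(labels)
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(logits)
    not_self = ~np.eye(n, dtype=bool)               # exclude self-pairs
    denom = (exp * not_self).sum(axis=1)
    pos = (labels[:, None] == labels[None, :]) & not_self
    loss = 0.0
    for i in range(n):
        if pos[i].any():
            loss += -np.mean(np.log(exp[i, pos[i]] / denom[i]))
    return loss / n

feats = np.random.default_rng(1).normal(size=(6, 8))
labels = np.array([0, 0, 1, 1, 2, 2])
loss = supcon_loss(feats, labels)
```

A batch whose same-label features already coincide yields a much smaller loss than a random batch, which is the pull-together/push-apart behavior the method relies on.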

Distribution Calibration for Out-of-Domain Detection with Bayesian Approximation

1 code implementation COLING 2022 Yanan Wu, Zhiyuan Zeng, Keqing He, Yutao Mou, Pei Wang, Weiran Xu

Out-of-Domain (OOD) detection is a key component in a task-oriented dialog system, which aims to identify whether a query falls outside the predefined supported intent set.

Out of Distribution (OOD) Detection

Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems

1 code implementation 17 Oct 2022 Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, Weiran Xu

Recent advances in neural approaches greatly improve task-oriented dialogue (TOD) systems which assist users to accomplish their goals.

Disentangling Confidence Score Distribution for Out-of-Domain Intent Detection with Energy-Based Learning

no code implementations 17 Oct 2022 Yanan Wu, Zhiyuan Zeng, Keqing He, Yutao Mou, Pei Wang, Yuanmeng Yan, Weiran Xu

In this paper, we propose a simple but strong energy-based score function to detect OOD where the energy scores of OOD samples are higher than IND samples.

Intent Detection Out of Distribution (OOD) Detection
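An energy-based score of this kind is simple to compute from a classifier's logits. The sketch below uses the standard free-energy formulation from the energy-based OOD literature, not necessarily the paper's exact variant; the logit vectors and threshold are illustrative assumptions.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Free energy of a logit vector: E(x) = -T * logsumexp(logits / T).
    Higher energy -> less confident -> more likely out-of-domain."""
    z = np.asarray(logits) / temperature
    m = z.max()
    return -temperature * (m + np.log(np.exp(z - m).sum()))

# A peaked (confident, in-domain-looking) logit vector gets lower energy
# than a flat (uncertain, OOD-looking) one.
ind_logits = [9.0, 0.5, 0.2, 0.1]
ood_logits = [1.1, 1.0, 0.9, 1.0]
e_ind = energy_score(ind_logits)
e_ood = energy_score(ood_logits)
threshold = -3.0  # illustrative; chosen on held-in validation data in practice
is_ood = e_ood > threshold
```

The thresholding step is where the abstract's claim lives: OOD samples receive higher energy than IND samples, so a single cutoff separates them.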

Watch the Neighbors: A Unified K-Nearest Neighbor Contrastive Learning Framework for OOD Intent Discovery

1 code implementation 17 Oct 2022 Yutao Mou, Keqing He, Pei Wang, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu

For OOD clustering stage, we propose a KCC method to form compact clusters by mining true hard negative samples, which bridges the gap between clustering and representation learning.

Clustering Contrastive Learning +3

UniNL: Aligning Representation Learning with Scoring Function for OOD Detection via Unified Neighborhood Learning

1 code implementation 19 Oct 2022 Yutao Mou, Pei Wang, Keqing He, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu

Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection.

Contrastive Learning Out of Distribution (OOD) Detection +2
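A KNN-based OOD scoring function of the kind mentioned can be sketched with NumPy alone: score a query by its distance to its k-th nearest in-domain training feature, flagging distant queries as OOD-like. The clusters, dimensions, and k below are toy assumptions, not the paper's setup.

```python
import numpy as np

def knn_ood_score(query, train_feats, k=3):
    """Distance to the k-th nearest neighbor among L2-normalized
    in-domain training features; a larger distance is more OOD-like."""
    q = query / np.linalg.norm(query)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    dists = np.linalg.norm(t - q, axis=1)
    return np.sort(dists)[k - 1]

rng = np.random.default_rng(2)
ind_train = rng.normal(loc=[5.0, 0.0], scale=0.1, size=(50, 2))  # IND cluster
ind_query = np.array([5.0, 0.05])   # near the in-domain cluster
ood_query = np.array([-4.0, 4.0])   # far from it
s_ind = knn_ood_score(ind_query, ind_train)
s_ood = knn_ood_score(ood_query, ind_train)
```

Because the scoring function and the representation objective are both neighborhood-based, aligning them (as the abstract describes) keeps training and detection consistent.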

Revisit Out-Of-Vocabulary Problem for Slot Filling: A Unified Contrastive Framework with Multi-level Data Augmentations

no code implementations 27 Feb 2023 Daichi Guo, Guanting Dong, Dayuan Fu, Yuxiang Wu, Chen Zeng, Tingfeng Hui, LiWen Wang, Xuefeng Li, Zechen Wang, Keqing He, Xinyue Cui, Weiran Xu

In real dialogue scenarios, existing slot filling models, which tend to memorize entity patterns, generalize significantly worse when facing Out-of-Vocabulary (OOV) problems.

Contrastive Learning slot-filling +1

A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition

no code implementations 27 Feb 2023 Guanting Dong, Zechen Wang, LiWen Wang, Daichi Guo, Dayuan Fu, Yuxiang Wu, Chen Zeng, Xuefeng Li, Tingfeng Hui, Keqing He, Xinyue Cui, QiXiang Gao, Weiran Xu

Specifically, we decouple class-specific prototypes and contextual semantic prototypes by two masking strategies to lead the model to focus on two different semantic information for inference.

Contrastive Learning few-shot-ner +4

Decoupling Pseudo Label Disambiguation and Representation Learning for Generalized Intent Discovery

1 code implementation 28 May 2023 Yutao Mou, Xiaoshuai Song, Keqing He, Chen Zeng, Pei Wang, Jingang Wang, Yunsen Xian, Weiran Xu

Previous methods suffer from a coupling of pseudo label disambiguation and representation learning, that is, the reliability of pseudo labels relies on representation learning, and representation learning is restricted by pseudo labels in turn.

Intent Discovery Pseudo Label +1

Seen to Unseen: Exploring Compositional Generalization of Multi-Attribute Controllable Dialogue Generation

1 code implementation 17 Jun 2023 Weihao Zeng, Lulu Zhao, Keqing He, Ruotong Geng, Jingang Wang, Wei Wu, Weiran Xu

In this paper, we explore the compositional generalization for multi-attribute controllable dialogue generation where a model can learn from seen attribute values and generalize to unseen combinations.

Attribute Dialogue Generation +1

Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with Inverse Prompting

1 code implementation 6 Jul 2023 Xuefeng Li, LiWen Wang, Guanting Dong, Keqing He, Jinzheng Zhao, Hao Lei, Jiachi Liu, Weiran Xu

Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the unlabeled target domain.

slot-filling Slot Filling

Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware Pre-training for KBQA

1 code implementation 28 Aug 2023 Guanting Dong, Rumei Li, Sirui Wang, Yupeng Zhang, Yunsen Xian, Weiran Xu

Knowledge Base Question Answering (KBQA) aims to answer natural language questions with factual information such as entities and relations in KBs.

Knowledge Base Question Answering Retrieval

Towards Robust and Generalizable Training: An Empirical Study of Noisy Slot Filling for Input Perturbations

no code implementations 5 Oct 2023 Jiachi Liu, LiWen Wang, Guanting Dong, Xiaoshuai Song, Zechen Wang, Zhengyang Wang, Shanglin Lei, Jinzheng Zhao, Keqing He, Bo Xiao, Weiran Xu

The proposed dataset contains five types of human-annotated noise, all of which occur in real dialogue scenarios, and extensive robust-training methods for slot filling are incorporated into the proposed framework.

slot-filling Slot Filling

Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task

1 code implementation 10 Oct 2023 Guanting Dong, Jinxu Zhao, Tingfeng Hui, Daichi Guo, Wenlong Wan, Boqi Feng, Yueyan Qiu, Zhuoma Gongque, Keqing He, Zechen Wang, Weiran Xu

To address these challenges, we propose a unified robustness evaluation framework based on the slot-filling task to systematically evaluate the dialogue understanding capability of LLMs in diverse input perturbation scenarios.

Data Augmentation Dialogue Understanding +3

Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT

1 code implementation 16 Oct 2023 Xiaoshuai Song, Keqing He, Pei Wang, Guanting Dong, Yutao Mou, Jingang Wang, Yunsen Xian, Xunliang Cai, Weiran Xu

The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to open-world intent sets, which is crucial to task-oriented dialogue (TOD) systems.

In-Context Learning Intent Discovery

Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking

1 code implementation 16 Oct 2023 Yuxiang Wu, Guanting Dong, Weiran Xu

Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time-consuming and costly.

Dialogue State Tracking In-Context Learning +3

APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection

no code implementations 20 Oct 2023 Pei Wang, Keqing He, Yutao Mou, Xiaoshuai Song, Yanan Wu, Jingang Wang, Yunsen Xian, Xunliang Cai, Weiran Xu

Detecting out-of-domain (OOD) intents from user queries is essential for a task-oriented dialogue system.

Knowledge Editing on Black-box Large Language Models

1 code implementation 13 Feb 2024 Xiaoshuai Song, Zhengyang Wang, Keqing He, Guanting Dong, Yutao Mou, Jinxu Zhao, Weiran Xu

Knowledge editing (KE) aims to efficiently and precisely modify the behavior of large language models (LLMs) to update specific knowledge without negatively influencing other knowledge.

knowledge editing

PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability

1 code implementation 18 Feb 2024 Dayuan Fu, Jianzhao Huang, Siyuan Lu, Guanting Dong, Yejie Wang, Keqing He, Weiran Xu

Addressing discrepancies between predictions and actual outcomes often helps individuals expand their thought processes and engage in reflection, thereby facilitating reasoning in the correct direction.

Language Modelling Large Language Model

Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-training for Noisy Slot Filling Task

no code implementations 22 Feb 2024 Jinxu Zhao, Guanting Dong, Yueyan Qiu, Tingfeng Hui, Xiaoshuai Song, Daichi Guo, Weiran Xu

In this study, we address the challenges posed by input perturbations in slot filling by proposing Noise-BERT, a unified Perturbation-Robust Framework with Noise Alignment Pre-training.

Adversarial Attack Contrastive Learning +5

Beyond the Known: Investigating LLMs Performance on Out-of-Domain Intent Detection

no code implementations 27 Feb 2024 Pei Wang, Keqing He, Yejie Wang, Xiaoshuai Song, Yutao Mou, Jingang Wang, Yunsen Xian, Xunliang Cai, Weiran Xu

Out-of-domain (OOD) intent detection aims to examine whether the user's query falls outside the predefined domain of the system, which is crucial for the proper functioning of task-oriented dialogue (TOD) systems.

Intent Detection Transfer Learning

Faceptor: A Generalist Model for Face Perception

3 code implementations 14 Mar 2024 Lixiong Qin, Mei Wang, Xuannan Liu, Yuhang Zhang, Wei Deng, Xiaoshuai Song, Weiran Xu, Weihong Deng

This design enhances the unification of model structure while improving application efficiency in terms of storage overhead.

Age Estimation Attribute +3

DivTOD: Unleashing the Power of LLMs for Diversifying Task-Oriented Dialogue Representations

no code implementations 31 Mar 2024 Weihao Zeng, Dayuan Fu, Keqing He, Yejie Wang, Yukai Xu, Weiran Xu

Language models pre-trained on general text have achieved impressive results in diverse fields.

Adversarial Semantic Decoupling for Recognizing Open-Vocabulary Slots

no code implementations EMNLP 2020 Yuanmeng Yan, Keqing He, Hong Xu, Sihong Liu, Fanyu Meng, Min Hu, Weiran Xu

Open-vocabulary slots, such as file name, album name, or schedule title, significantly degrade the performance of neural slot filling models, since these slots can take values from a virtually unlimited set and have no semantic restriction or length limit.

Sentence slot-filling +1

Give the Truth: Incorporate Semantic Slot into Abstractive Dialogue Summarization

no code implementations Findings (EMNLP) 2021 Lulu Zhao, Weihao Zeng, Weiran Xu, Jun Guo

Abstractive dialogue summarization suffers from many factual errors, which are due to the scattered salient elements in the multi-speaker information interaction process.

Abstractive Dialogue Summarization Contrastive Learning

Gradient-Based Adversarial Factual Consistency Evaluation for Abstractive Summarization

no code implementations EMNLP 2021 Zhiyuan Zeng, Jiaze Chen, Weiran Xu, Lei LI

Based on the artificial dataset, we train an evaluation model that not only makes accurate and robust factual-consistency judgments but can also trace factual errors interpretably via the backpropagated gradient distribution on token embeddings.

Abstractive Text Summarization Data Augmentation

Large-Scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models

1 code implementation EMNLP 2021 Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Zan Daoguang, Fuzheng Zhang, Wei Wu, Weiran Xu

The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the reasoning paths in the knowledge base (KB).

Question Answering Relation +2

A Finer-grain Universal Dialogue Semantic Structures based Model For Abstractive Dialogue Summarization

no code implementations Findings (EMNLP) 2021 Yuejie Lei, Fujia Zheng, Yuanmeng Yan, Keqing He, Weiran Xu

Although abstractive summarization models have achieved impressive results on document summarization tasks, their performance on dialogue modeling is much less satisfactory due to the crude and straightforward methods used for dialogue encoding.

Abstractive Dialogue Summarization Abstractive Text Summarization +1
