Search Results for author: LiWen Wang

Found 8 papers, 3 papers with code

AISFG: Abundant Information Slot Filling Generator

no code implementations · NAACL 2022 · Yang Yan, Junda Ye, Zhongbao Zhang, LiWen Wang

As an essential component of task-oriented dialogue systems, slot filling requires enormous labeled training data in a certain domain.

Few-Shot Learning slot-filling +2

Revisit Out-Of-Vocabulary Problem for Slot Filling: A Unified Contrastive Framework with Multi-level Data Augmentations

no code implementations · 27 Feb 2023 · Daichi Guo, Guanting Dong, Dayuan Fu, Yuxiang Wu, Chen Zeng, Tingfeng Hui, LiWen Wang, Xuefeng Li, Zechen Wang, Keqing He, Xinyue Cui, Weiran Xu

In real dialogue scenarios, existing slot filling models, which tend to memorize entity patterns, show significantly reduced generalization when facing Out-of-Vocabulary (OOV) problems.

Contrastive Learning slot-filling +1

A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition

no code implementations · 27 Feb 2023 · Guanting Dong, Zechen Wang, LiWen Wang, Daichi Guo, Dayuan Fu, Yuxiang Wu, Chen Zeng, Xuefeng Li, Tingfeng Hui, Keqing He, Xinyue Cui, QiXiang Gao, Weiran Xu

Specifically, we decouple class-specific prototypes and contextual semantic prototypes via two masking strategies, leading the model to focus on two different kinds of semantic information for inference.

Contrastive Learning few-shot-ner +4
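The core idea of masked prototype decoupling can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual implementation: `masked_prototypes` and its arguments are illustrative names, and the masks stand in for the two masking strategies (e.g. selecting entity tokens versus context tokens) described in the abstract.

```python
import numpy as np

def masked_prototypes(embeddings, labels, mask):
    """Sketch: compute per-class prototypes from token embeddings,
    keeping only tokens selected by `mask`. Applying two different
    masks yields two decoupled prototype views of the same data."""
    labels = np.asarray(labels)
    protos = {}
    for c in np.unique(labels[mask]):
        sel = mask & (labels == c)          # tokens of class c kept by mask
        protos[c] = embeddings[sel].mean(axis=0)  # prototype = mean embedding
    return protos
```

Running the function with an entity-token mask and then with its complement would produce the two prototype sets that the model is encouraged to keep distinct.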

A Robust Contrastive Alignment Method For Multi-Domain Text Classification

no code implementations · 26 Apr 2022 · Xuefeng Li, Hao Lei, LiWen Wang, Guanting Dong, Jinzheng Zhao, Jiachi Liu, Weiran Xu, Chunyun Zhang

In this paper, we propose a robust contrastive alignment method to align text classification features of various domains in the same feature space by supervised contrastive learning.

Contrastive Learning text-classification +1
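Supervised contrastive learning, the mechanism named in the abstract, can be sketched as a loss that pulls same-class features together and pushes different-class features apart in a shared space. The following is a generic textbook-style formulation in NumPy, not the paper's exact objective; all names are illustrative.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss: for each anchor,
    maximize the average log-probability of its same-class positives
    relative to all other samples in the batch."""
    labels = np.asarray(labels)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)  # unit norm
    sim = f @ f.T / temperature                    # pairwise similarities
    n = len(labels)
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    row_max = sim.max(axis=1, keepdims=True)       # stable log-softmax
    log_prob = sim - row_max - np.log(
        np.exp(sim - row_max).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_log_prob = np.where(positives, log_prob, 0.0).sum(axis=1)
    counts = positives.sum(axis=1)
    return -(pos_log_prob / np.maximum(counts, 1)).mean()
```

In a multi-domain setting, the batch would mix features from different domains so that same-class examples are aligned regardless of their source domain.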

InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER

1 code implementation · 8 Mar 2022 · LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu

Recently, prompt-based methods have achieved significant performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks.

Entity Typing Few-Shot Learning +4
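InstructionNER recasts NER as instruction-conditioned text generation. A hypothetical sketch of what such an input reformulation might look like follows; the template wording and function name are illustrative and do not reproduce the paper's actual prompt format.

```python
def build_instruction_prompt(sentence, entity_types):
    """Illustrative instruction-style input for generative NER:
    the model is asked to emit entities and types as text, rather
    than tag tokens with a classification head."""
    options = ", ".join(entity_types)
    return (
        f"Sentence: {sentence}\n"
        f"Instruction: extract all entities and their types from the "
        f"sentence above. Candidate types: {options}.\n"
        "Answer:"
    )
```

A sequence-to-sequence model fine-tuned on such prompts can then generalize to new entity types in few-shot settings by changing the candidate-type list.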

Dynamically Disentangling Social Bias from Task-Oriented Representations with Adversarial Attack

1 code implementation · NAACL 2021 · LiWen Wang, Yuanmeng Yan, Keqing He, Yanan Wu, Weiran Xu

In this paper, we propose an adversarial disentangled debiasing model to dynamically decouple social bias attributes from the intermediate representations trained on the main task.

Adversarial Attack Representation Learning
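A common mechanism for this kind of adversarial decoupling is gradient reversal: the shared representation descends the main-task gradient while ascending the bias classifier's gradient, so bias attributes become unpredictable from it. The one-step update below is a generic sketch of that idea under assumed names (`adversarial_debias_step`, `lam`), not the paper's actual training rule.

```python
import numpy as np

def adversarial_debias_step(z, task_grad, bias_grad, lam=1.0, lr=0.01):
    """Sketch of one representation update with gradient reversal:
    minimize the task loss (follow task_grad) while MAXIMIZING the
    bias classifier's loss (flip the sign of bias_grad)."""
    return z - lr * (task_grad - lam * bias_grad)
```

With `lam > 0` the bias gradient is subtracted rather than added, which is the sign flip that drives bias information out of the representation.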
