2 code implementations • 8 Apr 2024 • Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, PengFei Liu
To measure reasoning beyond final-answer accuracy, we introduce ReasonEval, a new methodology for evaluating the quality of reasoning steps.
no code implementations • 19 Mar 2024 • Pengchao Wu, Xuefeng Li, Jinghang Gu, Longhua Qian, Guodong Zhou
Biomedical event extraction is an information extraction task that identifies events in biomedical text; its targets include each event's type, trigger, and the arguments involved.
1 code implementation • 19 Feb 2024 • Run-Ze Fan, Xuefeng Li, Haoyang Zou, Junlong Li, Shwai He, Ethan Chern, Jiewen Hu, PengFei Liu
This paper explores elevating the quality of existing instruction data to better align it with human values, introducing a simple and effective approach named ReAlign, which reformats the responses in instruction data into a format that better matches pre-established criteria and the collated evidence.
1 code implementation • 6 Nov 2023 • Dongcheng Zou, Senzhang Wang, Xuefeng Li, Hao Peng, Yuandong Wang, Chunyang Liu, Kehua Sheng, Bo Zhang
Based on this, we propose a relative structural entropy-based position encoding and a multi-head attention masking scheme based on multi-layer encoding trees.
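The masking side of this idea can be illustrated with a generic sketch. Note that in the paper the mask is derived from multi-layer encoding trees; the boolean mask below is a hypothetical placeholder, and the single-head form stands in for the multi-head scheme:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention with a boolean mask.

    mask[i, j] = True means position i may attend to position j;
    disallowed pairs receive a large negative score, so their
    softmax weight is effectively zero.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V
```

With an identity mask, each position attends only to itself and the output reproduces `V`; a structure-derived mask would instead restrict attention to pairs licensed by the encoding tree.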
1 code implementation • 28 Aug 2023 • Guanting Dong, Zechen Wang, Jinxu Zhao, Gang Zhao, Daichi Guo, Dayuan Fu, Tingfeng Hui, Chen Zeng, Keqing He, Xuefeng Li, LiWen Wang, Xinyue Cui, Weiran Xu
The objective of few-shot named entity recognition is to identify named entities with limited labeled instances.
Ranked #1 on Few-shot NER on Few-NERD (INTER)
1 code implementation • 6 Jul 2023 • Xuefeng Li, LiWen Wang, Guanting Dong, Keqing He, Jinzheng Zhao, Hao Lei, Jiachi Liu, Weiran Xu
Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the unlabeled target domain.
no code implementations • 27 Feb 2023 • Guanting Dong, Zechen Wang, LiWen Wang, Daichi Guo, Dayuan Fu, Yuxiang Wu, Chen Zeng, Xuefeng Li, Tingfeng Hui, Keqing He, Xinyue Cui, QiXiang Gao, Weiran Xu
Specifically, we decouple class-specific prototypes and contextual semantic prototypes via two masking strategies, leading the model to focus on two different kinds of semantic information during inference.
no code implementations • 27 Feb 2023 • Daichi Guo, Guanting Dong, Dayuan Fu, Yuxiang Wu, Chen Zeng, Tingfeng Hui, LiWen Wang, Xuefeng Li, Zechen Wang, Keqing He, Xinyue Cui, Weiran Xu
In real dialogue scenarios, existing slot filling models, which tend to memorize entity patterns, generalize significantly worse when facing Out-of-Vocabulary (OOV) problems.
no code implementations • COLING 2022 • Guanting Dong, Daichi Guo, LiWen Wang, Xuefeng Li, Zechen Wang, Chen Zeng, Keqing He, Jinzheng Zhao, Hao Lei, Xinyue Cui, Yi Huang, Junlan Feng, Weiran Xu
Most existing slot filling models tend to memorize inherent patterns of entities and corresponding contexts from training data.
no code implementations • 26 Apr 2022 • Xuefeng Li, Hao Lei, LiWen Wang, Guanting Dong, Jinzheng Zhao, Jiachi Liu, Weiran Xu, Chunyun Zhang
In this paper, we propose a robust contrastive alignment method that aligns the text classification features of various domains into the same feature space via supervised contrastive learning.
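The supervised contrastive objective behind such alignment can be sketched as follows. This is a minimal numpy illustration of the standard SupCon loss, not the paper's full method; temperature and batch handling are simplified assumptions:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings z (n, d).

    Each anchor is pulled toward all same-label samples (positives)
    and pushed from all other samples in the batch.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit sphere
    sim = z @ z.T / tau                               # scaled similarities
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    exp = np.exp(logits)
    not_self = ~np.eye(len(labels), dtype=bool)
    denom = (exp * not_self).sum(axis=1)              # all non-self samples
    total, anchors = 0.0, 0
    for i in range(len(labels)):
        pos = [j for j in range(len(labels)) if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # anchors without positives are skipped
        total += -np.mean([logits[i, j] - np.log(denom[i]) for j in pos])
        anchors += 1
    return total / anchors
```

The loss is low when same-class embeddings cluster together regardless of which domain they came from, which is what drives the cross-domain alignment.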
1 code implementation • EMNLP 2021 • LiWen Wang, Xuefeng Li, Jiachi Liu, Keqing He, Yuanmeng Yan, Weiran Xu
Zero-shot cross-domain slot filling alleviates the data dependence in the case of data scarcity in the target domain, which has attracted extensive research.
no code implementations • 29 Sep 2021 • Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama
Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.
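The second category, loss correction with the transition matrix, can be sketched in a few lines. This is the standard forward-correction idea under an assumed known matrix T, not the paper's specific contribution:

```python
import numpy as np

def forward_corrected_nll(clean_probs, noisy_labels, T):
    """Forward loss correction for label noise.

    clean_probs: (n, c) model posteriors over clean classes.
    T[i, j] = P(noisy label = j | clean label = i).
    The clean posterior is pushed through T so the model is fit
    to the noisy labels while remaining consistent for clean ones.
    """
    noisy_probs = clean_probs @ T
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked))
```

When T is the identity (no noise), this reduces exactly to the ordinary negative log-likelihood, which is a handy sanity check.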
no code implementations • 23 Apr 2021 • Xuefeng Li, Alan Blair
Several regularization methods have recently been introduced which force the latent activations of an autoencoder or deep neural network to conform to either a Gaussian or hyperspherical distribution, or to minimize the implicit rank of the distribution in latent space.
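One common form of such a regularizer, pushing latent activations toward a standard Gaussian, can be sketched via batchwise moment matching. This is a generic illustration, not the specific methods the paper surveys:

```python
import numpy as np

def gaussian_kl_regularizer(z):
    """KL divergence from the batch's per-dimension Gaussian fit
    N(mu, var) to a standard normal N(0, 1), summed over dimensions.

    Penalizes latent codes whose batch statistics drift away from
    zero mean and unit variance.
    """
    mu = z.mean(axis=0)
    var = z.var(axis=0)
    return 0.5 * np.sum(var + mu**2 - 1.0 - np.log(var + 1e-8))
```

A batch that already has zero mean and unit variance incurs (near-)zero penalty, while shifted or rescaled activations are penalized.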
1 code implementation • 4 Feb 2021 • Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama
In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.
Ranked #14 on Learning with noisy labels on CIFAR-100N