1 code implementation • 28 May 2023 • Yutao Mou, Xiaoshuai Song, Keqing He, Chen Zeng, Pei Wang, Jingang Wang, Yunsen Xian, Weiran Xu
Previous methods suffer from a coupling of pseudo label disambiguation and representation learning: the reliability of pseudo labels depends on representation learning, while representation learning is in turn constrained by the pseudo labels.
no code implementations • 26 May 2023 • Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Kai Chen, Rui Yan
In this paper, we propose a novel approach, RankCSE, for unsupervised sentence representation learning, which incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
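A minimal sketch of how ranking consistency and ranking distillation can sit alongside an InfoNCE contrastive term, assuming in-batch similarity matrices from two dropout views and similarity scores from a teacher encoder; the function names, temperature, and loss weights below are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def listwise_kl(student_scores, target_scores, tau=0.05):
    """ListNet-style ranking loss: KL divergence between softmax-normalized score lists."""
    log_p = F.log_softmax(student_scores / tau, dim=-1)
    q = F.softmax(target_scores / tau, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")

def rankcse_style_loss(z1, z2, teacher_scores, tau=0.05, alpha=1.0, beta=1.0):
    # z1, z2: (batch, dim) embeddings of the same sentences under two dropout views
    # teacher_scores: (batch, batch) similarity scores produced by a teacher encoder
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T                                      # in-batch cosine similarities
    labels = torch.arange(sim.size(0), device=sim.device)
    l_contrast = F.cross_entropy(sim / tau, labels)      # InfoNCE contrastive term
    l_consist = listwise_kl(sim, sim.T.detach())         # ranking consistency between the two views
    l_distill = listwise_kl(sim, teacher_scores)         # ranking distillation from the teacher
    return l_contrast + alpha * l_consist + beta * l_distill
```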
no code implementations • 21 May 2023 • Chen Zhang, Yang Yang, Jingang Wang, Dawei Song
Finetuning pretrained language models (LMs) has enabled appealing performance on a diverse array of tasks.
1 code implementation • 20 May 2023 • Chen Zhang, Yang Yang, Jiahao Liu, Jingang Wang, Yunsen Xian, Benyou Wang, Dawei Song
However, when the capacity gap between the teacher and the student is large, a curse of capacity gap appears, leading to deficient distillation of LMs.
no code implementations • 20 Mar 2023 • Ying Mo, Hongyin Tang, Jiahao Liu, Qifan Wang, Zenglin Xu, Jingang Wang, Wei Wu, Zhoujun Li
There are three types of NER tasks: flat, nested, and discontinuous entity recognition.
no code implementations • 15 Dec 2022 • Liqi Yan, Qifan Wang, Siqi Ma, Jingang Wang, Changbin Yu
Instance segmentation in videos, which aims to segment and track multiple objects in video frames, has garnered a flurry of research attention in recent years.
1 code implementation • 19 Oct 2022 • Yutao Mou, Pei Wang, Keqing He, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu
Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection.
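As an illustration of the KNN-based scoring idea, the OOD score for a test utterance can be taken as its distance to the k-th nearest in-domain training embedding; the variable names and the cosine-distance choice here are assumptions for the sketch, not the paper's exact function:

```python
import numpy as np

def knn_ood_score(query_emb, ind_bank, k=10):
    """OOD score from the similarity to the k-th nearest in-domain (IND)
    training embedding; a larger score suggests the input is more likely OOD."""
    q = query_emb / np.linalg.norm(query_emb)
    bank = ind_bank / np.linalg.norm(ind_bank, axis=1, keepdims=True)
    sims = bank @ q                      # cosine similarities to all IND features
    kth_sim = np.sort(sims)[-k]          # similarity to the k-th nearest neighbor
    return 1.0 - kth_sim                 # distance-style score; threshold to flag OOD

# usage: flag an utterance as OOD if knn_ood_score(z, train_bank) exceeds a validated threshold
```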
1 code implementation • 17 Oct 2022 • Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, Weiran Xu
Recent advances in neural approaches have greatly improved task-oriented dialogue (TOD) systems, which assist users in accomplishing their goals.
1 code implementation • 17 Oct 2022 • Yutao Mou, Keqing He, Pei Wang, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu
For the OOD clustering stage, we propose a KCC method to form compact clusters by mining true hard negative samples, which bridges the gap between clustering and representation learning.
no code implementations • 10 Oct 2022 • Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan Wang, Wei Wu, Xiaojun Quan, Dawei Song
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for performing downstream tasks in a parameter-efficient manner.
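A minimal sketch of soft prompt tuning, assuming a HuggingFace-style encoder that accepts `inputs_embeds`: learnable prompt vectors are prepended to the token embeddings while the backbone stays frozen. The prompt length and initialization below are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prepend n_prompt learnable embeddings to the input of a frozen encoder."""
    def __init__(self, plm, n_prompt=20):
        super().__init__()
        self.plm = plm
        for p in self.plm.parameters():        # freeze the backbone; only the prompts train
            p.requires_grad = False
        dim = plm.get_input_embeddings().embedding_dim
        self.prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.plm.get_input_embeddings()(input_ids)
        bsz = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(bsz, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(bsz, prompt.size(1), device=attention_mask.device,
                                 dtype=attention_mask.dtype)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.plm(inputs_embeds=inputs_embeds, attention_mask=mask)
```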
1 code implementation • COLING 2022 • Yutao Mou, Keqing He, Yanan Wu, Pei Wang, Jingang Wang, Wei Wu, Yi Huang, Junlan Feng, Weiran Xu
Traditional intent classification models are based on a pre-defined intent set and only recognize limited in-domain (IND) intent classes.
1 code implementation • COLING 2022 • Chen Zhang, Lei Ren, Fang Ma, Jingang Wang, Wei Wu, Dawei Song
Thus, a natural question arises: Is structural bias still a necessity in the context of PLMs?
no code implementations • 31 Aug 2022 • Keqing He, Jingang Wang, Chaobo Sun, Wei Wu
In this paper, we propose a novel unified knowledge prompt pre-training framework, UFA (Unified Model For All Tasks), for customer service dialogues.
no code implementations • COLING 2022 • Borun Chen, Hongyin Tang, Jiahao Bu, Kai Zhang, Jingang Wang, Qifan Wang, Hai-Tao Zheng, Wei Wu, Liqian Yu
However, most current models use Chinese characters as inputs and are not able to encode semantic information contained in Chinese words.
no code implementations • 29 May 2022 • Chen Zhang, Yang Yang, Qifan Wang, Jiahao Liu, Jingang Wang, Yunsen Xian, Wei Wu, Dawei Song
In particular, motivated by the finding that the performance of the student is positively correlated to the scale-performance tradeoff of the teacher assistant, MiniDisc is designed with a λ-tradeoff to measure the optimality of the teacher assistant without trial distillation to the student.
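A rough sketch of selecting a teacher assistant by such a scale-performance tradeoff; the candidate list, field names, and the linear scoring rule are illustrative assumptions rather than MiniDisc's actual metric:

```python
# hypothetical teacher-assistant candidates with accuracy and size relative to the teacher
candidates = [
    {"name": "ta-6L", "acc": 0.88, "rel_size": 0.50},
    {"name": "ta-4L", "acc": 0.85, "rel_size": 0.33},
    {"name": "ta-2L", "acc": 0.78, "rel_size": 0.17},
]

def lambda_tradeoff(performance, scale, lam=0.5):
    """Score a candidate by trading off task performance against parameter scale
    (both assumed normalized to [0, 1]); higher is better."""
    return lam * performance + (1.0 - lam) * (1.0 - scale)

# pick the assistant with the best tradeoff, without trial-distilling each one to the student
best = max(candidates, key=lambda c: lambda_tradeoff(c["acc"], c["rel_size"]))
print(best["name"])
```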
1 code implementation • 11 May 2022 • Chen Zhang, Lei Ren, Jingang Wang, Wei Wu, Dawei Song
Prompt-tuning has shown appealing performance in few-shot classification by virtue of its capability in effectively exploiting pre-trained knowledge.
no code implementations • 18 Apr 2022 • Jiduan Liu, Jiahao Liu, Yang Yang, Jingang Wang, Wei Wu, Dongyan Zhao, Rui Yan
To enhance the performance of dense retrieval models without loss of efficiency, we propose a GNN-encoder model in which query (passage) information is fused into passage (query) representations via graph neural networks constructed from queries and their top retrieved passages.
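A toy sketch of the fusion step for a single query, assuming its top retrieved passages form its graph neighborhood; the attention-style aggregation and layer shapes are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryPassageFusion(nn.Module):
    """One message-passing step: fuse the embeddings of a query's top retrieved
    passages back into the query representation."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, query_emb, passage_embs):
        # query_emb: (dim,); passage_embs: (n_neighbors, dim) from first-stage retrieval
        attn = F.softmax(passage_embs @ query_emb, dim=0)   # attention over neighbor passages
        neighbor_msg = attn @ passage_embs                  # aggregated neighbor message
        return F.relu(self.proj(query_emb + neighbor_msg))  # fused query representation
```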
no code implementations • 5 Mar 2022 • Qifan Wang, Yi Fang, Anirudh Ravula, Ruining He, Bin Shen, Jingang Wang, Xiaojun Quan, Dongfang Liu
Network embedding is an effective technique to learn the low-dimensional representations of nodes in networks.
no code implementations • 8 Dec 2021 • Dan Li, Yang Yang, Hongyin Tang, Jingang Wang, Tong Xu, Wei Wu, Enhong Chen
With the boom of pre-trained transformers, representation-based models built on Siamese transformer encoders have become the mainstream technique for efficient text matching.
2 code implementations • ACL 2022 • Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, Maosong Sun
Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification.
no code implementations • ACL 2021 • Hongyin Tang, Xingwu Sun, Beihong Jin, Jingang Wang, Fuzheng Zhang, Wei Wu
Recently, retrieval models based on dense representations have gradually been applied in the first stage of document retrieval, showing better performance than traditional sparse vector-space models.
1 code implementation • NAACL 2021 • Jiahao Bu, Lei Ren, Shuang Zheng, Yang Yang, Jingang Wang, Fuzheng Zhang, Wei Wu
Aspect category sentiment analysis (ACSA) and review rating prediction (RP) are two essential tasks for detecting fine- and coarse-grained sentiment polarities.
no code implementations • 19 Oct 2020 • Yang Yang, Junmei Hao, Canjia Li, Zili Wang, Jingang Wang, Fuzheng Zhang, Rao Fu, Peixu Hou, Gong Zhang, Zhongyuan Wang
Existing work on tip generation does not take the query into consideration, which limits the impact of tips in search scenarios.
no code implementations • 19 May 2019 • Bowen Xing, Lejian Liao, Dandan Song, Jingang Wang, Fuzheng Zhang, Zhongyuan Wang, He-Yan Huang
This paper proposes a novel variant of LSTM, termed as aspect-aware LSTM (AA-LSTM), which incorporates aspect information into LSTM cells in the context modeling stage before the attention mechanism.
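An illustrative sketch of injecting an aspect vector into the recurrence via an extra gate on top of a standard LSTM cell; this is an assumption-laden simplification for intuition, not the AA-LSTM equations:

```python
import torch
import torch.nn as nn

class AspectAwareLSTMCell(nn.Module):
    """LSTM cell with an additional aspect gate that mixes the aspect vector
    into the cell state at every step."""
    def __init__(self, input_dim, hidden_dim, aspect_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        self.aspect_gate = nn.Linear(hidden_dim + aspect_dim, hidden_dim)
        self.aspect_proj = nn.Linear(aspect_dim, hidden_dim)

    def forward(self, x_t, state, aspect_vec):
        h, c = self.cell(x_t, state)
        g = torch.sigmoid(self.aspect_gate(torch.cat([h, aspect_vec], dim=-1)))
        c = c + g * torch.tanh(self.aspect_proj(aspect_vec))  # gate aspect info into the cell state
        return h, c
```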
no code implementations • WS 2018 • Yongchao Deng, Shanbo Cheng, Jun Lu, Kai Song, Jingang Wang, Shenglan Wu, Liang Yao, Guchun Zhang, Haibo Zhang, Pei Zhang, Changfeng Zhu, Boxing Chen
We participated in 5 translation directions: English ↔ Russian, English ↔ Turkish, and English → Chinese.
no code implementations • 5 Jan 2018 • Jingang Wang, Junfeng Tian, Long Qiu, Sheng Li, Jun Lang, Luo Si, Man Lan
Obtaining effective compression of lengthy product titles is a challenging and practical research problem in E-commerce.