no code implementations • 22 Mar 2024 • Xiaobin Zhang, Liangjun Zang, Qianwen Liu, Shuchong Wei, Songlin Hu
With the rise of prompt engineering, it is important to design effective prompt templates and verbalizers to extract relevant knowledge.
1 code implementation • ICASSP 2022 • Xiaohui Song, Liangjun Zang, Rong Zhang, Songlin Hu, Longtao Huang
However, the impact of emotion spread in a conversation is rarely addressed in existing research.
Ranked #14 on Emotion Recognition in Conversation on MELD
1 code implementation • ACL 2022 • Xing Wu, Chaochen Gao, Meng Lin, Liangjun Zang, Zhongyuan Wang, Songlin Hu
Before entering the neural network, a token is generally converted to the corresponding one-hot representation, which is a discrete distribution of the vocabulary.
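The one-hot conversion described above can be sketched as follows (a minimal illustration with a hypothetical toy vocabulary; real models index into vocabularies of tens of thousands of tokens):

```python
# Toy vocabulary mapping tokens to indices (hypothetical example).
vocab = {"the": 0, "cat": 1, "sat": 2}

def one_hot(token, vocab):
    """Convert a token to its one-hot vector: a discrete distribution
    over the vocabulary with all probability mass on one index."""
    vec = [0.0] * len(vocab)
    vec[vocab[token]] = 1.0
    return vec

print(one_hot("cat", vocab))  # [0.0, 1.0, 0.0]
```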
1 code implementation • 10 Dec 2021 • Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu
To tackle that, we propose an effective knowledge distillation framework for contrastive sentence embeddings, termed DistilCSE.
2 code implementations • COLING 2022 • Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, Songlin Hu
Unsup-SimCSE takes dropout as a minimal data augmentation method: it passes the same input sentence through a pre-trained Transformer encoder (with dropout turned on) twice, and the two resulting embeddings form a positive pair.
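The dropout-as-augmentation idea can be sketched in a few lines (a toy stand-in encoder; real Unsup-SimCSE uses a pre-trained BERT-style Transformer):

```python
import random

def encode(sentence, dropout_p=0.1, dim=8):
    """Toy stand-in for a Transformer encoder with dropout enabled.
    The base vector is deterministic per sentence; dropout independently
    zeroes units on each forward pass (hypothetical, for illustration)."""
    rng = random.Random(hash(sentence) % (2**32))
    base = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    # Inverted dropout: zero a unit with probability p, rescale the rest.
    return [0.0 if random.random() < dropout_p else x / (1 - dropout_p)
            for x in base]

sentence = "The cat sat on the mat."
z1 = encode(sentence)  # first forward pass
z2 = encode(sentence)  # second pass: same input, a different dropout mask
# (z1, z2) serve as a positive pair for the contrastive objective;
# embeddings of other sentences in the batch serve as negatives.
```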
1 code implementation • 25 May 2020 • Dongjun Wei, Yaxin Liu, Fuqing Zhu, Liangjun Zang, Wei Zhou, Yijun Lu, Songlin Hu
In this paper, we propose AutoSUM, a novel integration method for automatic feature extraction and multi-user preference simulation that overcomes the drawbacks of previous methods.
no code implementations • 22 Feb 2020 • Xiaohui Song, Liangjun Zang, Yipeng Su, Xing Wu, Jizhong Han, Songlin Hu
While several state-of-the-art approaches to dialogue state tracking (DST) have shown promising performance on several benchmarks, there is still a significant performance gap between seen slot values (i.e., values that occur in both the training set and the test set) and unseen ones (values that occur in the test set but not in the training set).
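The seen/unseen distinction can be made concrete with simple set operations (hypothetical slot values, for illustration only):

```python
# Hypothetical slot values observed for one slot in DST.
train_values = {"italian", "chinese", "french"}   # values seen in training
test_values = {"italian", "ethiopian"}            # values occurring at test time

# Seen values appear in both sets; unseen values occur only at test time.
seen = test_values & train_values      # -> {"italian"}
unseen = test_values - train_values    # -> {"ethiopian"}

print(sorted(seen), sorted(unseen))
```

The reported performance gap is between accuracy on the `seen` values and accuracy on the `unseen` ones.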
no code implementations • 5 Sep 2019 • Xing Wu, Dongjun Wei, Liangjun Zang, Jizhong Han, Songlin Hu
Automatic and human evaluation results show that TransSent can generate high-quality structured sentences and scales to different tasks.
no code implementations • 21 Aug 2019 • Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, Songlin Hu
Therefore, we propose a two-step approach, "Mask and Infill".
2 code implementations • 25 May 2019 • Dongjun Wei, Yaxin Liu, Fuqing Zhu, Liangjun Zang, Wei Zhou, Jizhong Han, Songlin Hu
Entity summarization aims at creating brief but informative descriptions of entities from knowledge graphs.
5 code implementations • 17 Dec 2018 • Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, Songlin Hu
BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and a backward model.