no code implementations • EMNLP 2018 • Yimeng Zhuang, Jinghui Xie, Yinhe Zheng, Xuan Zhu
Most models for learning word embeddings are trained on the context information of words, more precisely, first-order co-occurrence relations.
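As a rough illustration of what first-order co-occurrence based training means (this is a generic sketch, not the method proposed in the paper), the snippet below builds a windowed co-occurrence matrix and factorizes it into dense word vectors:

```python
# Minimal sketch of first-order co-occurrence based word embeddings.
# Illustrative only: counts within a window, then truncated SVD.
import numpy as np
from collections import Counter

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count symmetric co-occurrences within a +/-2 token window.
window = 2
counts = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[(idx[w], idx[sent[j]])] += 1

M = np.zeros((len(vocab), len(vocab)))
for (i, j), c in counts.items():
    M[i, j] = c

# Truncated SVD of the log-scaled co-occurrence matrix yields dense vectors.
U, S, _ = np.linalg.svd(np.log1p(M))
dim = 4
embeddings = U[:, :dim] * S[:dim]
print(embeddings[idx["cat"]])
```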
no code implementations • 27 Oct 2020 • Guanyi Chen, Yinhe Zheng, Yupei Du
Personalised response generation enables generating human-like responses by means of assigning the generator a social identity.
no code implementations • 6 Jun 2021 • Yinhe Zheng, Yida Wang, Pei Ke, Zhenyu Yang, Minlie Huang
This paper proposes to combine pretrained language models with the modular dialogue paradigm for open-domain dialogue modeling.
no code implementations • 1 Nov 2021 • Rongsheng Zhang, Yinhe Zheng, Xiaoxi Mao, Minlie Huang
However, fine-tuning all the parameters of the PrLM on a small domain-specific corpus distorts the learned generic knowledge, and it is also expensive to deploy a whole fine-tuned PrLM for each domain.
no code implementations • ACL 2022 • Yingxiu Zhao, Zhiliang Tian, Huaxiu Yao, Yinhe Zheng, Dongkyu Lee, Yiping Song, Jian Sun, Nevin L. Zhang
Building models of natural language processing (NLP) is challenging in low-resource scenarios where only limited data are available.
no code implementations • 23 Mar 2022 • Yequan Wang, Xuying Meng, Yiyi Liu, Aixin Sun, Yao Wang, Yinhe Zheng, Minlie Huang
These models are hence not optimized for dialog-level emotion detection, i.e., predicting the emotion category of a dialog as a whole.
no code implementations • 31 Aug 2022 • Yinhe Zheng
Specifically, we dedicate task-level prompts to capture task-specific knowledge to retain high LL performances and maintain instance-level prompts to learn knowledge shared across different input samples to improve the model's generalization performance.
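A hedged sketch of this dual-prompt idea, with illustrative module and parameter names rather than the authors' implementation: one learned prompt per task, plus a shared pool of instance-level prompts selected by input similarity.

```python
# Hedged sketch of combining task-level and instance-level prompts in front of
# a frozen LM; names and shapes are illustrative, not the paper's.
import torch
import torch.nn as nn

class DualPromptPool(nn.Module):
    def __init__(self, num_tasks, prompt_len, hidden, pool_size=8):
        super().__init__()
        # One learned prompt per task: captures task-specific knowledge.
        self.task_prompts = nn.Parameter(torch.randn(num_tasks, prompt_len, hidden))
        # A shared pool of instance-level prompts, selected per input by
        # similarity between the input representation and learned keys.
        self.keys = nn.Parameter(torch.randn(pool_size, hidden))
        self.instance_prompts = nn.Parameter(torch.randn(pool_size, prompt_len, hidden))

    def forward(self, task_id, input_repr):
        # input_repr: (batch, hidden) pooled representation of the input.
        scores = input_repr @ self.keys.t()                   # (batch, pool_size)
        weights = torch.softmax(scores, dim=-1)               # soft selection
        inst = torch.einsum("bp,pld->bld", weights, self.instance_prompts)
        task = self.task_prompts[task_id].expand(input_repr.size(0), -1, -1)
        # Both prompts are concatenated and prepended to the LM input embeddings.
        return torch.cat([task, inst], dim=1)                 # (batch, 2*prompt_len, hidden)
```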
no code implementations • 21 Sep 2022 • Sahand Sabour, Wen Zhang, Xiyao Xiao, Yuwei Zhang, Yinhe Zheng, Jiaxin Wen, Jialu Zhao, Minlie Huang
In this study, we analyze the effectiveness of Emohaa in reducing symptoms of mental distress.
no code implementations • COLING 2022 • Haoxiang Shi, Rongsheng Zhang, Jiaan Wang, Cen Wang, Yinhe Zheng, Tetsuya Sakai
Pre-trained Language Models (PLMs) are the cornerstone of modern Natural Language Processing (NLP).
no code implementations • 10 Nov 2022 • Hao Lang, Yinhe Zheng, Jian Sun, Fei Huang, Luo Si, Yongbin Li
Out-of-Domain (OOD) intent detection is important for practical dialog systems.
no code implementations • 23 Feb 2023 • Yushan Qian, Bo wang, Ting-En Lin, Yinhe Zheng, Ying Zhu, Dongming Zhao, Yuexian Hou, Yuchuan Wu, Yongbin Li
Empathetic dialogue is a human-like behavior that requires the perception of both affective factors (e.g., emotion status) and cognitive factors (e.g., cause of the emotion).
no code implementations • 5 May 2023 • Hao Lang, Yinhe Zheng, Binyuan Hui, Fei Huang, Yongbin Li
Out-of-Domain (OOD) intent detection is vital for practical dialogue systems, and it usually requires considering multi-turn dialogue contexts.
no code implementations • 5 May 2023 • Hao Lang, Yinhe Zheng, Yixuan Li, Jian Sun, Fei Huang, Yongbin Li
Out-of-distribution (OOD) detection is essential for the reliable and safe deployment of machine learning systems in the real world.
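For context only, the following sketch shows the standard maximum-softmax-probability baseline for OOD detection; it illustrates the task setup, not the method proposed in this paper.

```python
# Hedged sketch: maximum-softmax-probability baseline for OOD detection.
import torch
import torch.nn.functional as F

def ood_score(logits):
    """Higher score = more likely in-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def detect_ood(logits, threshold=0.5):
    # Inputs whose confidence falls below the threshold are flagged as OOD.
    return ood_score(logits) < threshold

logits = torch.tensor([[4.0, 0.1, 0.2],    # confident -> in-distribution
                       [0.3, 0.2, 0.4]])   # flat -> likely OOD
print(detect_ood(logits))
```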
no code implementations • 11 Aug 2023 • Tear Gosling, Alpin Dale, Yinhe Zheng
With the emergence of increasingly powerful large language models, there is a burgeoning interest in leveraging these models for casual conversation and role-play applications.
1 code implementation • ACL 2022 • Siyang Liu, Sahand Sabour, Yinhe Zheng, Pei Ke, Xiaoyan Zhu, Minlie Huang
We provide both empirical and theoretical evidence to show that our method effectively removes the biases existing in the original distinct score.
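The sketch below computes the original distinct-n score that the paper analyzes, together with an expectation-based normalization that illustrates the general idea of the correction; the exact debiasing formula should be taken from the paper itself.

```python
# Standard distinct-n, plus an illustrative expectation-based normalization.
def distinct_n(tokens, n=1):
    """Original distinct score: #unique n-grams / #total n-grams."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def expectation_adjusted_distinct(tokens, vocab_size, n=1):
    # Normalize by the expected number of distinct n-grams a random sequence
    # of the same length would contain (illustrative correction only).
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    c = len(ngrams)
    expected = vocab_size * (1.0 - ((vocab_size - 1) / vocab_size) ** c)
    return len(set(ngrams)) / expected

tokens = "the cat sat on the mat and the dog sat too".split()
print(distinct_n(tokens, n=1),
      expectation_adjusted_distinct(tokens, vocab_size=10000, n=1))
```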
1 code implementation • EMNLP 2021 • Chen Henry Wu, Yinhe Zheng, Xiaoxi Mao, Minlie Huang
Grounded dialogue models generate responses that are grounded on certain concepts.
1 code implementation • 9 Sep 2019 • Yinhe Zheng, Guanyi Chen, Minlie Huang
In addition, we demonstrate that the effectiveness of these pseudo OOD data can be further improved by efficiently utilizing unlabeled data.
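A hedged sketch of how pseudo OOD samples can be folded into training as an extra "OOD" class; the generation of those samples (e.g., via a generative model over unlabeled data) is abstracted away, and all names and shapes are illustrative.

```python
# Hedged sketch: train an intent classifier with an additional OOD class
# populated by pseudo OOD samples.
import torch
import torch.nn as nn

num_intents = 5
classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                           nn.Linear(64, num_intents + 1))   # +1 = OOD class
criterion = nn.CrossEntropyLoss()

in_domain_feats = torch.randn(32, 128)
in_domain_labels = torch.randint(0, num_intents, (32,))
pseudo_ood_feats = torch.randn(16, 128)        # would come from a generator
pseudo_ood_labels = torch.full((16,), num_intents, dtype=torch.long)

feats = torch.cat([in_domain_feats, pseudo_ood_feats])
labels = torch.cat([in_domain_labels, pseudo_ood_labels])
loss = criterion(classifier(feats), labels)
loss.backward()
```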
1 code implementation • 24 May 2022 • Yinhe Zheng, Guanyi Chen
We have noticed that Marek et al. (2021) try to re-implement our paper Zheng et al. (2020a) in their work "OodGAN: Generative Adversarial Network for Out-of-Domain Data Generation".
1 code implementation • 16 Oct 2022 • Chujie Zheng, Jinfeng Zhou, Yinhe Zheng, Libiao Peng, Zhen Guo, Wenquan Wu, ZhengYu Niu, Hua Wu, Minlie Huang
Dialogue contradiction is a critical issue in open-domain dialogue systems.
1 code implementation • EMNLP 2020 • Rongsheng Zhang, Yinhe Zheng, Jianzhi Shao, Xiaoxi Mao, Yadong Xi, Minlie Huang
Further, a model-level distillation process is employed to distill a teacher model trained on high-quality paired data to augmented dialogue pairs, thereby preventing dialogue models from being affected by the noise in the augmented data.
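A minimal sketch of the model-level distillation step described above: a student trained on augmented dialogue pairs is regularized toward a teacher trained on clean paired data. The temperature-scaled KL loss below is a standard distillation objective, not necessarily the paper's exact formulation.

```python
# Standard knowledge-distillation loss (hedged stand-in for the paper's setup).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

student_logits = torch.randn(8, 32000, requires_grad=True)
teacher_logits = torch.randn(8, 32000)   # produced by the frozen teacher
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```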
1 code implementation • ACL 2021 • Yida Wang, Yinhe Zheng, Yong Jiang, Minlie Huang
Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature.
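One common remedy for this over-confidence issue is to soften the one-hot target. The sketch below shows plain label smoothing as a simplified illustration of that idea; it is not the paper's specific method.

```python
# Hedged sketch: plain label smoothing over the target vocabulary distribution.
import torch
import torch.nn.functional as F

def smoothed_targets(labels, vocab_size, eps=0.1):
    # Move eps probability mass from the gold token to all tokens uniformly.
    one_hot = F.one_hot(labels, vocab_size).float()
    return one_hot * (1.0 - eps) + eps / vocab_size

labels = torch.tensor([3, 7, 1])
targets = smoothed_targets(labels, vocab_size=10)
logits = torch.randn(3, 10, requires_grad=True)
loss = -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
loss.backward()
```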
1 code implementation • 27 Sep 2020 • Yinhe Zheng, Zikai Chen, Rongsheng Zhang, Shilei Huang, Xiaoxi Mao, Minlie Huang
However, this task is far from well-explored due to the difficulties of rendering a particular style in coherent responses, especially when the target style is embedded only in unpaired texts that cannot be directly used to train the dialogue model.
2 code implementations • 12 Nov 2019 • Yinhe Zheng, Rongsheng Zhang, Xiaoxi Mao, Minlie Huang
Further, to incorporate the target persona in the decoding process and to balance its contribution, an attention routing structure is devised in the decoder to merge features extracted from the target persona and dialogue contexts using dynamically predicted weights.
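A simplified, hedged stand-in for this routing idea: a gate predicts a weight from the persona and context features and blends them, so the model can dynamically control how much persona is expressed. Names and shapes are illustrative, not the paper's architecture.

```python
# Hedged sketch of merging persona and context features with a dynamically
# predicted weight.
import torch
import torch.nn as nn

class WeightedMerge(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.gate = nn.Linear(2 * hidden, 1)

    def forward(self, persona_feat, context_feat):
        # Predict a per-position weight in [0, 1] from both feature streams,
        # then blend them; the weight controls how much persona is expressed.
        alpha = torch.sigmoid(self.gate(torch.cat([persona_feat, context_feat], dim=-1)))
        return alpha * persona_feat + (1.0 - alpha) * context_feat

merge = WeightedMerge(hidden=256)
persona = torch.randn(2, 16, 256)   # (batch, seq, hidden)
context = torch.randn(2, 16, 256)
print(merge(persona, context).shape)  # torch.Size([2, 16, 256])
```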
1 code implementation • LREC 2022 • Yinhe Zheng, Guanyi Chen, Xin Liu, Jian Sun
To better investigate this issue, we manually annotate 100K dialogues from MMChat and further filter the corpus accordingly, which yields MMChat-hf.
1 code implementation • 29 Nov 2021 • Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, Jian Sun, Yongbin Li
Pre-trained models have proved to be powerful in enhancing task-oriented dialog systems.
Ranked #1 on End-To-End Dialogue Modelling on MULTIWOZ 2.0
3 code implementations • 28 Jan 2019 • Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, Xuan Zhu
In this paper, we investigate the problem of incorporating explicit personality traits in dialogue generation to deliver personalized dialogues.
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.
1 code implementation • 14 Oct 2022 • Yingxiu Zhao, Yinhe Zheng, Zhiliang Tian, Chang Gao, Bowen Yu, Haiyang Yu, Yongbin Li, Jian Sun, Nevin L. Zhang
Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems.
1 code implementation • 23 Nov 2022 • Yingxiu Zhao, Yinhe Zheng, Bowen Yu, Zhiliang Tian, Dongkyu Lee, Jian Sun, Haiyang Yu, Yongbin Li, Nevin L. Zhang
In this paper, we explore a novel setting, semi-supervised lifelong language learning (SSLL), where a model learns sequentially arriving language tasks with both labeled and unlabeled data.
1 code implementation • 11 May 2023 • Yi Dai, Hao Lang, Yinhe Zheng, Fei Huang, Yongbin Li
A retrieve-then-rerank framework is further introduced to select in-context examples, which guide the LM to generate text that expresses knowledge for QA tasks.
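A hedged sketch of a retrieve-then-rerank loop for selecting in-context examples; the embedding and reranking functions are placeholders rather than the paper's components.

```python
# Illustrative two-stage selection: cheap retrieval, then reranking.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_examples(query_vec, candidates, rerank_fn, k_retrieve=20, k_keep=4):
    # Stage 1: cheap retrieval by embedding similarity.
    scored = sorted(candidates, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    shortlist = scored[:k_retrieve]
    # Stage 2: more expensive reranking (e.g., by a scoring model) on the shortlist.
    reranked = sorted(shortlist, key=rerank_fn, reverse=True)
    return reranked[:k_keep]

rng = np.random.default_rng(0)
pool = [{"text": f"example {i}", "vec": rng.normal(size=64)} for i in range(100)]
query = rng.normal(size=64)
chosen = select_examples(query, pool, rerank_fn=lambda c: len(c["text"]))
print([c["text"] for c in chosen])
```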
1 code implementation • 11 May 2023 • Yi Dai, Hao Lang, Yinhe Zheng, Bowen Yu, Fei Huang, Yongbin Li
Specifically, we dedicate task-level prompts to capture task-specific knowledge to retain high LL performances and maintain instance-level prompts to learn knowledge shared across input samples to improve the model's generalization performance.
2 code implementations • 10 Aug 2020 • Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, Minlie Huang
The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling.