no code implementations • ACL 2022 • Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan
Knowledge-grounded conversation (KGC) shows great potential for building an engaging and knowledgeable chatbot, and knowledge selection is one of its key ingredients.
1 code implementation • ACL 2022 • Chang Liu, Chongyang Tao, Jiazhan Feng, Dongyan Zhao
Transferring knowledge to a small model through distillation has attracted great interest in recent years.
no code implementations • ACL 2022 • Chang Liu, Xu Tan, Chongyang Tao, Zhenxin Fu, Dongyan Zhao, Tie-Yan Liu, Rui Yan
To enable the chatbot to foresee the dialogue future, we design a beam-search-like roll-out strategy for dialogue future simulation using a typical dialogue generation model and a dialogue selector.
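A minimal sketch of what such a roll-out might look like, treating the dialogue generation model and the dialogue selector as black-box callables; `generate_candidates` and `score_dialogue` are hypothetical placeholders, not the authors' released code.

```python
# Beam-search-like roll-out for dialogue future simulation (illustrative).
from typing import Callable, List, Tuple


def rollout(context: List[str],
            generate_candidates: Callable[[List[str]], List[str]],
            score_dialogue: Callable[[List[str]], float],
            beam_width: int = 3,
            depth: int = 2) -> List[str]:
    """Expand simulated futures turn by turn, keeping the top-scoring beams."""
    beams: List[Tuple[List[str], float]] = [(context, 0.0)]
    for _ in range(depth):
        expanded = []
        for history, _ in beams:
            for reply in generate_candidates(history):
                future = history + [reply]
                expanded.append((future, score_dialogue(future)))
        # Prune to the highest-scoring simulated futures.
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]  # most promising simulated dialogue future


if __name__ == "__main__":
    # Toy stand-ins: propose canned replies, score by total length.
    gen = lambda h: [f"reply-{len(h)}-{i}" for i in range(3)]
    score = lambda h: float(len(" ".join(h)))
    print(rollout(["hello"], gen, score))
```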
no code implementations • 23 May 2022 • Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Kai Zhang, Daxin Jiang
Large-scale retrieval aims to recall relevant documents from a huge collection given a query.
no code implementations • NeurIPS 2021 • Xueliang Zhao, Tingchen Fu, Chongyang Tao, Wei Wu, Dongyan Zhao, Rui Yan
Grounding dialogue generation in external knowledge has shown great potential for building a system capable of replying with knowledgeable and engaging responses.
1 code implementation • 6 Apr 2022 • Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan
In this work, we introduce personal memory into knowledge selection in KGC to address the personalization issue.
1 code implementation • ACL 2022 • Jia-Chen Gu, Chao-Hong Tan, Chongyang Tao, Zhen-Hua Ling, Huang Hu, Xiubo Geng, Daxin Jiang
To address these challenges, we present HeterMPC, a heterogeneous graph-based neural network for response generation in MPCs that models the semantics of utterances and interlocutors simultaneously with two types of nodes in a graph.
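A hedged sketch of the two-node-type graph idea, using PyTorch Geometric's `HeteroData`; the edge types `speaks` and `replies` are illustrative simplifications rather than the paper's exact schema (requires `torch` and `torch_geometric`).

```python
import torch
from torch_geometric.data import HeteroData


def build_mpc_graph(num_utterances, speaker_of, reply_to,
                    num_interlocutors, dim=64):
    data = HeteroData()
    # One node per utterance and one per interlocutor, with random
    # placeholder features standing in for learned embeddings.
    data["utterance"].x = torch.randn(num_utterances, dim)
    data["interlocutor"].x = torch.randn(num_interlocutors, dim)
    # interlocutor -> utterance edges: who spoke each utterance.
    spk = torch.tensor([[speaker_of[u] for u in range(num_utterances)],
                        list(range(num_utterances))])
    data["interlocutor", "speaks", "utterance"].edge_index = spk
    # utterance -> utterance edges: the reply structure of the conversation.
    pairs = [(u, t) for u, t in enumerate(reply_to) if t is not None]
    if pairs:
        data["utterance", "replies", "utterance"].edge_index = \
            torch.tensor(pairs).t().contiguous()
    return data


# Three utterances by two speakers; u1 replies to u0, u2 replies to u1.
graph = build_mpc_graph(3, speaker_of=[0, 1, 0], reply_to=[None, 0, 1],
                        num_interlocutors=2)
print(graph)
```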
1 code implementation • Findings (ACL) 2022 • Chao-Hong Tan, Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Huang Hu, Xiubo Geng, Daxin Jiang
To address the problem, we propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
1 code implementation • ACL 2022 • YuFei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, Daxin Jiang
This paper focuses on data augmentation for low-resource Natural Language Understanding (NLU) tasks.
no code implementations • 28 Jan 2022 • Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang
A straightforward solution is to resort to more diverse positives produced by a multi-augmenting strategy, but an open question remains: how to learn, without supervision, from diverse positives of uneven augmentation quality in the text domain.
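One way to picture the problem: an InfoNCE-style loss over several positives per anchor, down-weighting each positive by an estimated augmentation quality. This is a generic illustration of learning from uneven positives, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F


def weighted_multi_positive_loss(anchor, positives, quality, temp=0.05):
    """anchor: [B, D]; positives: [B, P, D]; quality: [B, P] in [0, 1]."""
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    B, P, _ = positives.shape
    # Similarity of each anchor to every positive of every example: [B, B, P].
    sim = torch.einsum("bd,npd->bnp", anchor, positives) / temp
    log_prob = F.log_softmax(sim.reshape(B, B * P), dim=-1).reshape(B, B, P)
    own = log_prob[torch.arange(B), torch.arange(B)]  # [B, P] own positives
    # Quality-weighted average over each anchor's own positives.
    weights = quality / quality.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return -(weights * own).sum(dim=-1).mean()


loss = weighted_multi_positive_loss(torch.randn(8, 32),
                                    torch.randn(8, 4, 32),
                                    torch.rand(8, 4))
print(loss.item())
```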
no code implementations • 1 Oct 2021 • Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang
For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.
1 code implementation • ACM Transactions on Information Systems 2021 • Ruijian Xu, Chongyang Tao, Jiazhan Feng, Wei Wu, Rui Yan, Dongyan Zhao
To tackle these challenges, we propose a representation-interaction-matching framework that explores multiple types of deep interactive representations to build context-response matching models for response selection.
no code implementations • ACL 2021 • Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, Rui Yan
Recently, many studies have emerged on building retrieval-based dialogue systems that can effectively leverage background knowledge (e.g., documents) when conversing with humans.
no code implementations • NeurIPS 2021 • YuFei Wang, Can Xu, Huang Hu, Chongyang Tao, Stephen Wan, Mark Dras, Mark Johnson, Daxin Jiang
Sequence-to-Sequence (S2S) neural text generation models, especially pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks.
1 code implementation • ACL 2021 • Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Xiubo Geng, Daxin Jiang
Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction.
no code implementations • NAACL 2021 • Chongyang Tao, Shen Gao, Juntao Li, Yansong Feng, Dongyan Zhao, Rui Yan
Sequential information, a.k.a. order, is assumed to be essential for processing a sequence with recurrent neural network or convolutional neural network based encoders.
1 code implementation • ACL 2021 • Zujie Liang, Huang Hu, Can Xu, Chongyang Tao, Xiubo Geng, Yining Chen, Fan Liang, Daxin Jiang
The retriever aims to retrieve a correlated image to the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image.
no code implementations • 7 May 2021 • Binbin Xu, Chongyang Tao, Zidu Feng, Youssef Raqui, Sylvie Ranwez
This study presents a large-scale benchmark of cloud-based speech-to-text systems: Google Cloud Speech-to-Text, Microsoft Azure Cognitive Services, Amazon Transcribe, and IBM Watson Speech to Text.
no code implementations • 17 Mar 2021 • Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, Rui Yan
To fill the gap between these up-to-date methods and real-world applications, we incorporate user-specific dialogue history into response selection and propose a personalized hybrid matching network (PHMN).
1 code implementation • EMNLP 2020 • Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan
We study knowledge-grounded dialogue generation with pre-trained language models.
no code implementations • 14 Sep 2020 • Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, Rui Yan
To address these issues, in this paper we propose learning a context-response matching model with auxiliary self-supervised tasks designed for dialogue data on top of pre-trained language models. A minimal multi-task sketch follows this entry.
Ranked #2 on Conversational Response Selection on E-commerce
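A minimal sketch of the multi-task setup, assuming the auxiliary self-supervised losses are simply added to the matching loss with fixed weights; the task names and weights here are illustrative, not the paper's exact configuration.

```python
import torch


def total_loss(main_loss, aux_losses, aux_weights):
    """Combine the main matching loss with weighted auxiliary losses."""
    loss = main_loss
    for name, aux in aux_losses.items():
        loss = loss + aux_weights.get(name, 1.0) * aux
    return loss


loss = total_loss(
    main_loss=torch.tensor(0.7),
    aux_losses={"next_session": torch.tensor(0.4),
                "utterance_restoration": torch.tensor(0.9)},
    aux_weights={"next_session": 0.5, "utterance_restoration": 0.5},
)
print(loss)  # tensor(1.3500)
```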
1 code implementation • NeurIPS 2020 • Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao
While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain.
no code implementations • 30 Apr 2020 • Jiayi Zhang, Chongyang Tao, Zhenjing Xu, Qiaojing Xie, Wei Chen, Rui Yan
Aiming to generate responses that approximate the ground truth and receive high ranking scores from the discriminator, the two generators learn to produce improved, highly relevant responses and competitive unobserved candidates, respectively, while the discriminative ranker is trained to distinguish true responses from adversarial ones, thereby combining the merits of both generators.
no code implementations • ICLR 2020 • Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan
In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
no code implementations • IJCNLP 2019 • Jia Li, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan
We study how to sample negative examples to automatically construct a training set for effective model learning in retrieval-based dialogue systems.
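A toy illustration of the design space: mix random negatives with lexically similar "hard" ones drawn from the candidate pool. The word-overlap heuristic is a stand-in for the sampling strategies actually studied in the paper.

```python
import random


def sample_negatives(context, gold, pool, n_random=2, n_hard=2):
    candidates = [r for r in pool if r != gold]
    # Easy negatives: uniform random draws from the pool.
    negatives = random.sample(candidates, k=min(n_random, len(candidates)))
    # "Hard" negatives: highest word overlap with the context and gold reply.
    ctx_words = set(" ".join(context).split()) | set(gold.split())
    ranked = sorted(candidates,
                    key=lambda r: len(ctx_words & set(r.split())),
                    reverse=True)
    negatives += [r for r in ranked if r not in negatives][:n_hard]
    return negatives


pool = ["sure, see you at noon", "no idea", "the weather is nice",
        "see you tomorrow at noon", "i like pizza"]
print(sample_negatives(["shall we meet at noon?"],
                       "sure, see you at noon", pool))
```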
1 code implementation • ACL 2019 • Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan
Recently, researchers have paid great attention to retrieval-based dialogue in the open domain.
Ranked #8 on Conversational Response Selection on Douban
no code implementations • 18 Jun 2019 • Xiaoye Tan, Rui Yan, Chongyang Tao, Mingrui Wu
Considering that words with different characteristics in a text have different importance for classification, grouping them separately can strengthen the semantic expression of each part.
no code implementations • ACL 2019 • Can Xu, Wei Wu, Chongyang Tao, Huang Hu, Matt Schuerman, Ying Wang
We present open domain response generation with meta-words.
no code implementations • 11 Jun 2019 • Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, Rui Yan
We present a document-grounded matching network (DGMN) for response selection that can power a knowledge-aware retrieval-based chatbot system.
no code implementations • ACL 2019 • Jiazhan Feng, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan
Under the framework, we simultaneously learn two matching models with independent training sets.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, Rui Yan
Several continual learning methods have been proposed to address the problem.
1 code implementation • EMNLP 2018 • Xiuying Chen, Shen Gao, Chongyang Tao, Yan Song, Dongyan Zhao, Rui Yan
In this paper, we introduce Iterative Text Summarization (ITS), an iteration-based model for supervised extractive text summarization, inspired by the observation that it is often necessary for a human to read an article multiple times in order to fully understand and summarize its contents. A toy sketch of the iterative loop follows this entry.
Ranked #13 on Extractive Text Summarization on CNN / Daily Mail
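A toy sketch of the iterative loop, with a greedy word-coverage scorer standing in for the model's learned scorer and a small stability bonus linking passes; the real ITS model reads and re-scores with neural encoders.

```python
def iterative_summarize(sentences, n_select=2, n_iters=2):
    selected = []
    for _ in range(n_iters):
        new_selection, covered = [], set()
        for _ in range(n_select):
            # Greedily pick the sentence adding the most uncovered words,
            # lightly boosting sentences kept from the previous pass.
            def gain(i):
                novelty = len(set(sentences[i].split()) - covered)
                return novelty + (0.5 if i in selected else 0.0)
            best = max((i for i in range(len(sentences))
                        if i not in new_selection), key=gain)
            new_selection.append(best)
            covered |= set(sentences[best].split())
        selected = new_selection
    return [sentences[i] for i in sorted(selected)]


doc = ["the storm hit the coast on monday",
       "officials ordered evacuations",
       "the storm weakened by tuesday",
       "residents returned home on wednesday"]
print(iterative_summarize(doc))
```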
1 code implementation • EMNLP 2018 • Huang Hu, Xianchao Wu, Bingfeng Luo, Chongyang Tao, Can Xu, Wei Wu, Zhan Chen
The 20 Questions (Q20) game is a well-known game that encourages deductive reasoning and creativity.
no code implementations • 22 Aug 2018 • Chongyang Tao, Wei Wu, Can Xu, Yansong Feng, Dongyan Zhao, Rui Yan
In this paper, we study context-response matching with pre-trained contextualized representations for multi-turn response selection in retrieval-based chatbots.
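A hedged sketch of the basic recipe, assuming a cross-encoder setup: concatenate context and candidate, encode with a pre-trained BERT, and score the [CLS] vector with a linear head (untrained here; it would be fine-tuned on response-selection data). The model name and head are illustrative, not the paper's exact architecture.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
score_head = torch.nn.Linear(encoder.config.hidden_size, 1)


def match_score(context, response):
    # [CLS] u1 [SEP] u2 ... [SEP] response [SEP]
    inputs = tokenizer(" [SEP] ".join(context), response,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        cls = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector
    return score_head(cls).item()


print(match_score(["how are you?", "fine, and you?"],
                  "doing great, thanks!"))
```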
no code implementations • ICLR 2018 • Ning Miao, Hengliang Wang, Ran Le, Chongyang Tao, Mingyue Shang, Rui Yan, Dongyan Zhao
Traditional recurrent neural network (RNN) or convolutional neural network (CNN) based sequence-to-sequence models cannot handle tree-structured data well.
1 code implementation • 11 Jan 2017 • Chongyang Tao, Lili Mou, Dongyan Zhao, Rui Yan
Open-domain human-computer conversation has been attracting increasing attention over the past few years.