no code implementations • 23 May 2022 • Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Kai Zhang, Daxin Jiang
Large-scale retrieval aims to recall relevant documents from a huge collection given a query.
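Not the paper's model, just a minimal sketch of the dense dual-encoder retrieval setup this line of work builds on: documents and the query are embedded into a shared vector space (the toy embeddings below are placeholders) and relevance reduces to an inner-product top-k search.

```python
import numpy as np

def retrieve_top_k(query_vec, doc_matrix, k=2):
    """Rank documents by inner-product similarity with the query embedding."""
    scores = doc_matrix @ query_vec          # (num_docs,)
    order = np.argsort(-scores)[:k]          # indices of the k best-scoring documents
    return order, scores[order]

# Toy corpus: 4 documents embedded in a 3-dimensional space (placeholder values).
docs = np.array([[0.1, 0.9, 0.0],
                 [0.8, 0.1, 0.1],
                 [0.0, 0.2, 0.8],
                 [0.5, 0.5, 0.0]])
query = np.array([0.7, 0.2, 0.1])
print(retrieve_top_k(query, docs))
```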
no code implementations • 12 Apr 2022 • Qingfeng Sun, Can Xu, Huang Hu, Yujing Wang, Jian Miao, Xiubo Geng, Yining Chen, Fei Xu, Daxin Jiang
(2) How to cohere with context and preserve the knowledge when generating a stylized response.
1 code implementation • Findings (ACL) 2022 • Chao-Hong Tan, Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Huang Hu, Xiubo Geng, Daxin Jiang
To address the problem, we propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
1 code implementation • ACL 2022 • YuFei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, Daxin Jiang
This paper focuses on data augmentation for low-resource Natural Language Understanding (NLU) tasks.
no code implementations • 28 Jan 2022 • Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang
A straightforward solution is to resort to more diverse positives produced by a multi-augmenting strategy, but an open question remains: how to learn, without supervision, from diverse positives of uneven augmentation quality in the text domain.
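Not the paper's method; just a minimal sketch of what learning from several positives per anchor can look like in an InfoNCE-style contrastive loss. The uniform averaging over positives stands in for the quality-aware weighting that the open question above concerns, and all tensors below are placeholders.

```python
import torch
import torch.nn.functional as F

def multi_positive_info_nce(anchor, positives, negatives, temperature=0.05):
    """Contrastive loss with several positives per anchor: each positive is pulled
    toward the anchor against a shared pool of negatives, and the per-positive
    losses are averaged (uniform weighting; quality-aware weighting would go here)."""
    anchor = F.normalize(anchor, dim=-1)            # (d,)
    positives = F.normalize(positives, dim=-1)      # (P, d)
    negatives = F.normalize(negatives, dim=-1)      # (N, d)
    pos_sim = positives @ anchor / temperature      # (P,)
    neg_sim = negatives @ anchor / temperature      # (N,)
    # For every positive, the softmax denominator covers that positive plus all negatives.
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim.expand(len(pos_sim), -1)], dim=1)
    labels = torch.zeros(len(pos_sim), dtype=torch.long)  # the positive sits in column 0
    return F.cross_entropy(logits, labels)

# Toy call with random embeddings: one anchor, 3 positives, 16 negatives.
loss = multi_positive_info_nce(torch.randn(128), torch.randn(3, 128), torch.randn(16, 128))
print(loss.item())
```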
no code implementations • 26 Jan 2022 • Bo Chang, Can Xu, Matthieu Lê, Jingchen Feng, Ya Le, Sriraj Badam, Ed Chi, Minmin Chen
Recurrent recommender systems have been successful in capturing the temporal dynamics in users' activity trajectories.
no code implementations • 19 Nov 2021 • Yuntao Li, Can Xu, Huang Hu, Lei Sha, Yan Zhang, Daxin Jiang
The sequence representation plays a key role in learning the matching degree between the dialogue context and the response.
no code implementations • ACL 2022 • Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yaming Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng, Daxin Jiang
In such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate parameters that depend on multimodal dialogues from the entire generation model.
no code implementations • 14 Oct 2021 • Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang
In this paper, we present a pre-trained language model (PLM) based framework called RID for conversational recommender system (CRS).
1 code implementation • EMNLP 2021 • Zujie Liang, Huang Hu, Can Xu, Jian Miao, Yingying He, Yining Chen, Xiubo Geng, Fan Liang, Daxin Jiang
Second, only the items mentioned in the training corpus have a chance to be recommended in the conversation.
1 code implementation • ACL 2022 • Wei Chen, Yeyun Gong, Can Xu, Huang Hu, Bolun Yao, Zhongyu Wei, Zhihao Fan, Xiaowu Hu, Bartuer Zhou, Biao Cheng, Daxin Jiang, Nan Duan
We study the problem of coarse-grained response selection in retrieval-based dialogue systems.
no code implementations • Findings (EMNLP) 2021 • Feilong Chen, Xiuyi Chen, Can Xu, Daxin Jiang
Specifically, a posterior distribution over visual objects is inferred from both context (history and questions) and answers, and it ensures the appropriate grounding of visual objects during the training process.
no code implementations • NeurIPS 2021 • YuFei Wang, Can Xu, Huang Hu, Chongyang Tao, Stephen Wan, Mark Dras, Mark Johnson, Daxin Jiang
Sequence-to-Sequence (S2S) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks.
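As a concrete, if generic, illustration of such a pre-trained S2S model in use, the snippet below loads a public BART checkpoint through Hugging Face Transformers and decodes with beam search; it is not the paper's training or fine-tuning setup.

```python
# Minimal use of a pre-trained S2S model via Hugging Face Transformers;
# facebook/bart-base is one public checkpoint (untuned, so it will roughly
# reconstruct the input rather than perform a downstream task).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```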
1 code implementation • ACL 2021 • Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Xiubo Geng, Daxin Jiang
Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction.
1 code implementation • ACL 2021 • Zujie Liang, Huang Hu, Can Xu, Chongyang Tao, Xiubo Geng, Yining Chen, Fan Liang, Daxin Jiang
The retriever aims to retrieve a correlated image to the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image.
1 code implementation • ACL 2021 • Weizhen Qi, Yeyun Gong, Yu Yan, Can Xu, Bolun Yao, Bartuer Zhou, Biao Cheng, Daxin Jiang, Jiusheng Chen, Ruofei Zhang, Houqiang Li, Nan Duan
ProphetNet is a pre-training-based natural language generation method that shows strong performance on English text summarization and question generation tasks.
1 code implementation • 28 Jan 2021 • Can Xu, Ahmed M. Alaa, Ioana Bica, Brent D. Ershoff, Maxime Cannesson, Mihaela van der Schaar
Organ transplantation is often the last resort for treating end-stage illness, but the probability of a successful transplantation depends greatly on compatibility between donors and recipients.
no code implementations • 19 Nov 2020 • Yufan Zhao, Wei Wu, Can Xu
We study knowledge-grounded dialogue generation with pre-trained language models.
1 code implementation • EMNLP 2020 • Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan
We study knowledge-grounded dialogue generation with pre-trained language models.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ze Yang, Wei Wu, Can Xu, Xinnian Liang, Jiaqi Bai, Liran Wang, Wei Wang, Zhoujun Li
Generating responses that follow a desired style has great potential to extend applications of open-domain dialogue systems, yet it is hindered by the lack of parallel data for training.
1 code implementation • NeurIPS 2020 • Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao
While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues, which are difficult to obtain.
no code implementations • EMNLP 2020 • Yufan Zhao, Can Xu, Wei Wu, Lei Yu
We study multi-turn response generation for open-domain dialogues.
no code implementations • 4 Apr 2020 • Ze Yang, Wei Wu, Huang Hu, Can Xu, Wei Wang, Zhoujun Li
Thus, we propose learning a response generation model with both image-grounded dialogues and textual dialogues, by assuming that the visual scene information at the time of a conversation can be represented by an image and recovering the latent images of the textual dialogues through text-to-image generation techniques.
no code implementations • ICLR 2020 • Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan
In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
no code implementations • 25 Dec 2019 • Yi Liu, Tianyu Liang, Can Xu, Xianwei Zhang, Xianhong Chen, Wei-Qiang Zhang, Liang He, Dandan Song, Ruyun Li, Yangcheng Wu, Peng Ouyang, Shouyi Yin
This paper describes the systems submitted by the Department of Electronic Engineering, Institute of Microelectronics of Tsinghua University and TsingMicro Co. Ltd. (THUEE) to the NIST 2019 Speaker Recognition Evaluation CTS challenge.
1 code implementation • IJCNLP 2019 • Ze Yang, Wei Wu, Jian Yang, Can Xu, Zhoujun Li
Since the paired data is now no longer sufficient to train a neural generation model, we consider leveraging large-scale unpaired data, which is much easier to obtain, and propose response generation with both paired and unpaired data.
no code implementations • IJCNLP 2019 • Ze Yang, Can Xu, Wei Wu, Zhoujun Li
Automatic news comment generation is a new testbed for techniques of natural language generation.
1 code implementation • ACL 2019 • Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan
Recently, researchers have paid great attention to open-domain retrieval-based dialogue.
Ranked #8 on Conversational Response Selection on Douban
no code implementations • ACL 2019 • Can Xu, Wei Wu, Chongyang Tao, Huang Hu, Matt Schuerman, Ying Wang
We present open domain response generation with meta-words.
no code implementations • 11 Jun 2019 • Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, Rui Yan
We present a document-grounded matching network (DGMN) for response selection that can power a knowledge-aware retrieval-based chatbot system.
no code implementations • 22 Feb 2019 • Jiaxi Tang, Francois Belletti, Sagar Jain, Minmin Chen, Alex Beutel, Can Xu, Ed H. Chi
Our approach employs a mixture of models, each with a different temporal range.
2 code implementations • 25 Aug 2018 • Liang He, Xianhong Chen, Can Xu, Jia Liu
Most current state-of-the-art text-independent speaker verification systems take probabilistic linear discriminant analysis (PLDA) as their backend classifiers.
Multiobjective Optimization • Text-Independent Speaker Verification
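The entry above mentions PLDA backends; below is a minimal sketch of two-covariance PLDA pair scoring (the log-likelihood ratio of same-speaker vs. different-speaker hypotheses), with toy isotropic covariances standing in for matrices estimated from data. It is not the system proposed in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def plda_llr(x1, x2, B, W):
    """Log-likelihood ratio of a simplified two-covariance PLDA backend for a pair
    of (already centered / length-normalized) speaker embeddings: between-speaker
    covariance B, within-speaker covariance W."""
    d = len(x1)
    pair = np.concatenate([x1, x2])
    total = B + W
    same = np.block([[total, B], [B, total]])          # same speaker: shared latent identity
    diff = np.block([[total, np.zeros((d, d))],
                     [np.zeros((d, d)), total]])       # different speakers: independent identities
    return (multivariate_normal.logpdf(pair, mean=np.zeros(2 * d), cov=same)
            - multivariate_normal.logpdf(pair, mean=np.zeros(2 * d), cov=diff))

# Toy check with isotropic covariances (placeholder values, not trained parameters).
d = 4
B, W = 2.0 * np.eye(d), 0.5 * np.eye(d)
enroll, test = np.random.randn(d), np.random.randn(d)
print(plda_llr(enroll, test, B, W))
```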
1 code implementation • EMNLP 2018 • Huang Hu, Xianchao Wu, Bingfeng Luo, Chongyang Tao, Can Xu, Wei Wu, Zhan Chen
The 20 Questions (Q20) game is a well-known game that encourages deductive reasoning and creativity.
no code implementations • 22 Aug 2018 • Chongyang Tao, Wei Wu, Can Xu, Yansong Feng, Dongyan Zhao, Rui Yan
In this paper, we study context-response matching with pre-trained contextualized representations for multi-turn response selection in retrieval-based chatbots.
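A generic sketch of what context-response matching with a pre-trained contextualized encoder can look like when the encoder is used as a cross-encoder over the concatenated pair; the checkpoint and the randomly initialized classification head below are placeholders, not the paper's model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# BERT used as a cross-encoder: context and response are fed as a sentence pair,
# and a (here untrained) binary head scores whether the response matches the context.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

context = "do you like sci-fi movies ? [SEP] yes , especially space operas ."
response = "then you should watch the expanse ."
inputs = tokenizer(context, response, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits                      # (1, 2)
match_prob = torch.softmax(logits, dim=-1)[0, 1].item()  # probability of "matches"
print(match_prob)
```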
no code implementations • 19 Jul 2018 • Can Xu, Wei Wu, Yu Wu
We study open domain dialogue generation with dialogue acts designed to explain how people engage in social chat.
no code implementations • ICLR 2018 • Wei Wu, Can Xu, Yu Wu, Zhoujun Li
Conventional methods model open domain dialogue generation as a black box through end-to-end learning from large scale conversation data.
no code implementations • 30 Nov 2017 • Yu Wu, Wei Wu, Dejian Yang, Can Xu, Zhoujun Li, Ming Zhou
We study response generation for open domain conversation in chatbots.
no code implementations • CL 2019 • Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, Ming Zhou
The task requires matching a response candidate with a conversation context, whose challenges include how to recognize important parts of the context, and how to model the relationships among utterances in the context.
no code implementations • NeurIPS 2016 • Mohammad Saberian, Jose Costa Pereira, Can Xu, Jian Yang, Nuno Vasconcelos
We argue that the intermediate mapping, e.g., the boosting predictor, preserves the discriminant aspects of the data, and that by controlling the dimension of this mapping it is possible to obtain discriminant low-dimensional representations of the data.
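Not the paper's algorithm; merely a toy illustration of the general idea that a boosting predictor can serve as an intermediate mapping whose low-dimensional, per-class outputs retain discriminant information. The dataset and model choices are arbitrary placeholders.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                      # 64-dimensional inputs, 10 classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

booster = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# The per-class margins form a 10-dimensional representation of each 64-dimensional input.
Z_te = booster.decision_function(X_te)
print(Z_te.shape)                                         # (n_test_samples, 10)
```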
no code implementations • 21 Nov 2014 • Can Xu, Suleyman Cetintas, Kuang-Chih Lee, Li-Jia Li
Images have become one of the most popular types of media through which users convey their emotions within online social networks.
no code implementations • CVPR 2014 • Can Xu, Nuno Vasconcelos
A new method for learning pooling receptive fields for recognition is presented.