no code implementations • COLING 2022 • Jiazhan Feng, Chongyang Tao, Zhen Li, Chang Liu, Tao Shen, Dongyan Zhao
In this paper, we propose a reciprocal learning approach to jointly optimize a knowledge retriever and a response ranker for knowledge-grounded response retrieval without ground-truth knowledge labels.
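A minimal sketch of the reciprocal-learning idea (not the paper's exact objective): with no gold knowledge labels, the response ranker's matching scores act as a soft posterior that supervises the knowledge retriever, while the retriever's distribution over knowledge candidates weights the ranker's loss in return. The `retriever` and `ranker` callables are hypothetical scoring functions.

```python
# Sketch only: ranker scores (detached) give the retriever a pseudo-target,
# and the retriever's (detached) distribution weights the ranker's loss.
import torch
import torch.nn.functional as F

def reciprocal_step(retriever, ranker, context, knowledge_cands, response,
                    opt_retriever, opt_ranker):
    # Retriever: distribution over knowledge candidates given the context.
    retr_logits = retriever(context, knowledge_cands)           # [num_cands]
    retr_probs = F.softmax(retr_logits, dim=-1)

    # Ranker: matching score of the response under each knowledge candidate.
    rank_logits = torch.stack([
        ranker(context, k, response) for k in knowledge_cands   # scalar each
    ])

    # Ranker loss: expected log-likelihood of the observed response,
    # weighted by the retriever's (detached) knowledge distribution.
    rank_loss = -(retr_probs.detach() * F.logsigmoid(rank_logits)).sum()

    # Retriever loss: match the ranker's (detached) posterior over which
    # knowledge best explains the observed response.
    posterior = F.softmax(rank_logits.detach(), dim=-1)
    retr_loss = F.kl_div(F.log_softmax(retr_logits, dim=-1), posterior,
                         reduction="sum")

    opt_ranker.zero_grad(); rank_loss.backward(); opt_ranker.step()
    opt_retriever.zero_grad(); retr_loss.backward(); opt_retriever.step()
    return rank_loss.item(), retr_loss.item()
```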
1 code implementation • ACL 2022 • Chang Liu, Chongyang Tao, Jiazhan Feng, Dongyan Zhao
Transferring knowledge from a large pre-trained language model to a small model through distillation has attracted great interest in recent years.
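For context, a minimal sketch of the standard soft-target distillation loss (temperature-scaled KL against the teacher plus cross-entropy on the labels); the paper's exact distillation objective may differ.

```python
# Standard soft-target distillation loss (Hinton-style), shown for reference;
# not necessarily the paper's exact objective.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: student matches the teacher's tempered distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```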
1 code implementation • 13 Nov 2023 • Hejing Cao, Zhenwei An, Jiazhan Feng, Kun Xu, Liwei Chen, Dongyan Zhao
While large language models exhibit remarkable performance in the Question Answering task, they are susceptible to hallucinations.
no code implementations • 10 Nov 2023 • Jiazhan Feng, Ruochen Xu, Junheng Hao, Hiteshi Sharma, Yelong Shen, Dongyan Zhao, Weizhu Chen
Despite their impressive performance, any parsing error inevitably causes the external logical solver to fail to execute, leaving the logical question unanswered.
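The failure mode can be illustrated with a simple guard around the parse-then-solve pipeline; `parse_to_logic`, `run_solver`, and `llm_answer` below are hypothetical placeholders, not the paper's API.

```python
# Illustration of the failure mode only: if the LLM's parse of the question
# into solver syntax is malformed, the symbolic solver cannot execute and the
# pipeline yields no answer unless it falls back to answering directly.
def answer_with_solver(question, parse_to_logic, run_solver, llm_answer):
    try:
        program = parse_to_logic(question)    # LLM translates NL -> solver syntax
        return run_solver(program)            # symbolic execution gives the answer
    except (SyntaxError, ValueError):         # any parsing/execution error
        # Without a fallback, the logical question simply gets no answer.
        return llm_answer(question)           # degrade to direct LLM answering
```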
no code implementations • 27 Sep 2023 • Xiaowen Sun, Jiazhan Feng, Yuxuan Wang, Yuxuan Lai, Xingyu Shen, Dongyan Zhao
In this paper, we focus on the innovative dialog-to-image generation task, where the model synthesizes a high-resolution image aligned with the given dialog context as a response.
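A rough illustration of the task setting (not the paper's model): flatten the dialogue context into a conditioning prompt and feed it to an off-the-shelf text-to-image pipeline. Hugging Face diffusers and a GPU are assumed here.

```python
# Task illustration only: the dialogue context becomes a single conditioning
# prompt for a generic text-to-image pipeline (assumes diffusers + a GPU).
import torch
from diffusers import StableDiffusionPipeline

def dialog_to_image(dialog_turns, model_id="runwayml/stable-diffusion-v1-5"):
    prompt = " ".join(dialog_turns)[-300:]    # crude truncation of the context
    pipe = StableDiffusionPipeline.from_pretrained(model_id,
                                                   torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    return pipe(prompt).images[0]             # a PIL image as the "response"

# Example:
# dialog_to_image(["A: What should I paint?", "B: A sunset over the sea."])
```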
2 code implementations • 12 May 2023 • Jiazhan Feng, Chongyang Tao, Xiubo Geng, Tao Shen, Can Xu, Guodong Long, Dongyan Zhao, Daxin Jiang
Information retrieval (IR) plays a crucial role in locating relevant resources from vast amounts of data, and its applications have evolved from traditional knowledge bases to modern retrieval models (RMs).
4 code implementations • 24 Apr 2023 • Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Daxin Jiang
In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLM instead of humans.
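A minimal sketch of the underlying idea, using an LLM to rewrite seed instructions into progressively more complex variants; the prompt wording and the `complete` callable are assumptions, not the released prompts.

```python
# Sketch of LLM-driven instruction complexification: each round asks a
# generator LLM to rewrite an instruction into a harder variant.
EVOLVE_PROMPT = (
    "Rewrite the following instruction so it becomes more complex, e.g. by "
    "adding constraints, requiring multiple reasoning steps, or asking for a "
    "concrete input example. Keep it answerable.\n\n"
    "Instruction: {instruction}\nRewritten instruction:"
)

def evolve_instructions(seed_instructions, complete, rounds=3):
    pool = list(seed_instructions)
    for _ in range(rounds):
        evolved = [complete(EVOLVE_PROMPT.format(instruction=i)) for i in pool]
        pool.extend(e.strip() for e in evolved if e.strip())
    return pool
```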
1 code implementation • 10 Nov 2022 • Jiazhan Feng, Qingfeng Sun, Can Xu, Pu Zhao, Yaming Yang, Chongyang Tao, Dongyan Zhao, Qingwei Lin
First, it is the largest multi-modal conversation dataset by number of dialogues, 88x larger than the previous largest.
Ranked #2 on Multimodal Intent Recognition on MMDialog
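A hedged sketch of how one record in a multi-modal dialogue corpus like this can be represented, with turns carrying text and/or image references; the field names are illustrative, not the dataset's exact released schema.

```python
# Illustrative record layout for a multi-modal dialogue (field names are
# assumptions, not MMDialog's exact schema).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    speaker: str
    text: str = ""
    image_paths: List[str] = field(default_factory=list)

@dataclass
class MultiModalDialogue:
    dialogue_id: str
    turns: List[Turn]

example = MultiModalDialogue(
    dialogue_id="demo-0001",
    turns=[
        Turn(speaker="A", text="Just got back from the beach!",
             image_paths=["images/beach_001.jpg"]),
        Turn(speaker="B", text="Looks amazing, where is that?"),
    ],
)
```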
no code implementations • 1 Oct 2021 • Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang
For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.
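The standard way such pre-trained models are applied to response selection is as a cross-encoder that scores a (context, response) pair; a minimal interface sketch follows, where the generic checkpoint and untrained classification head are only for illustration.

```python
# Cross-encoder scoring of a (context, response) pair with a pre-trained LM;
# the checkpoint is generic and the head is untrained, so this only
# illustrates the interface, not a fine-tuned benchmark model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def match_score(context: str, response: str) -> float:
    inputs = tokenizer(context, response, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(response matches)
```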
1 code implementation • ACM Transactions on Information Systems 2021 • Ruijian Xu, Chongyang Tao, Jiazhan Feng, Wei Wu, Rui Yan, Dongyan Zhao
To tackle these challenges, we propose a representation[K]-interaction[L]-matching framework that explores multiple types of deep interactive representations to build context-response matching models for response selection.
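A compact PyTorch sketch of the general representation-interaction-matching pattern: several encoders produce representations of context and response, each pair yields an interaction (similarity) matrix, and pooled interaction features feed a matching head. Dimensions and module choices are assumptions, not the paper's architecture.

```python
# Sketch of representation -> interaction -> matching; sizes and modules are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class InteractionMatcher(nn.Module):
    def __init__(self, vocab_size=30000, dim=128, num_reprs=2):
        super().__init__()
        # K independent "representation" encoders (here: embedding + GRU).
        self.embeds = nn.ModuleList(nn.Embedding(vocab_size, dim)
                                    for _ in range(num_reprs))
        self.encoders = nn.ModuleList(nn.GRU(dim, dim, batch_first=True)
                                      for _ in range(num_reprs))
        self.head = nn.Linear(num_reprs * 2, 1)  # pooled interaction features

    def forward(self, context_ids, response_ids):
        feats = []
        for emb, enc in zip(self.embeds, self.encoders):
            c, _ = enc(emb(context_ids))               # [B, Lc, d]
            r, _ = enc(emb(response_ids))              # [B, Lr, d]
            inter = torch.bmm(c, r.transpose(1, 2))    # interaction matrix [B, Lc, Lr]
            feats += [inter.amax(dim=(1, 2)), inter.mean(dim=(1, 2))]
        return torch.sigmoid(self.head(torch.stack(feats, dim=-1))).squeeze(-1)
```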
no code implementations • ACL 2021 • Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, Rui Yan
Recently, many studies have emerged on building retrieval-based dialogue systems that can effectively leverage background knowledge (e.g., documents) when conversing with humans.
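A hedged sketch of the common grounding pattern in such systems (not this paper's model): prepend the top retrieved background passages to the dialogue context, then score each candidate response against the knowledge-augmented context. `retrieve` and `score` are hypothetical callables.

```python
# Common knowledge-grounded ranking pattern, shown as a sketch only.
def rank_with_knowledge(context_turns, candidates, retrieve, score, top_k=3):
    query = " ".join(context_turns)
    knowledge = retrieve(query, top_k=top_k)               # list of passages
    grounded_context = " ".join(knowledge + context_turns)
    scored = [(score(grounded_context, cand), cand) for cand in candidates]
    return [cand for _, cand in sorted(scored, reverse=True)]
```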
no code implementations • ACL 2019 • Jiazhan Feng, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan
Under the framework, we simultaneously learn two matching models with independent training sets.
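A minimal sketch of one common instantiation of co-teaching two models: each model selects its small-loss (likely clean) examples and hands them to its peer for the parameter update. The paper's exact teaching strategies differ; this only illustrates the alternating-supervision loop.

```python
# Co-teaching sketch: each model picks low-loss examples for its peer's update.
import torch
import torch.nn.functional as F

def co_teaching_step(model_a, model_b, opt_a, opt_b, batch, keep_ratio=0.7):
    x, y = batch                                    # features and binary labels
    loss_a = F.binary_cross_entropy_with_logits(model_a(x), y, reduction="none")
    loss_b = F.binary_cross_entropy_with_logits(model_b(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(y)))

    # A selects clean examples for B, and vice versa.
    idx_for_b = torch.topk(loss_a, k, largest=False).indices
    idx_for_a = torch.topk(loss_b, k, largest=False).indices

    opt_a.zero_grad()
    F.binary_cross_entropy_with_logits(model_a(x[idx_for_a]), y[idx_for_a]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.binary_cross_entropy_with_logits(model_b(x[idx_for_b]), y[idx_for_b]).backward()
    opt_b.step()
```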