no code implementations • ACL 2022 • Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan
Knowledge-grounded conversation (KGC) shows great potential for building an engaging and knowledgeable chatbot, and knowledge selection is one of its key ingredients.
no code implementations • 22 Oct 2024 • Qintong Li, Jiahui Gao, Sheng Wang, Renjie Pi, Xueliang Zhao, Chuan Wu, Xin Jiang, Zhenguo Li, Lingpeng Kong
In this paper, we present a novel approach, ReverseGen, designed to automatically generate effective training samples that expose the weaknesses of LLMs.
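As a rough illustration of the failure-driven loop this describes, here is a minimal Python sketch: a proposer model generates candidate queries, the target model answers them, and answers judged as failures are collected as new training samples. Every object and method name here (`proposer.propose`, `target.answer`, `evaluator.judge`, `proposer.update`) is a hypothetical stand-in, not the paper's actual interface.

```python
# Hypothetical sketch of failure-driven data generation in the spirit of
# ReverseGen; all objects and method names are illustrative stand-ins.

def harvest_weakness_examples(proposer, target, evaluator, n_rounds=3, batch=64):
    """Collect (query, reference) pairs on which the target model fails."""
    samples = []
    for _ in range(n_rounds):
        queries = proposer.propose(batch)              # explore new inputs
        failed_queries = []
        for q in queries:
            answer = target.answer(q)
            failed, reference = evaluator.judge(q, answer)
            if failed:                                 # the target got this one wrong
                samples.append({"query": q, "reference": reference})
                failed_queries.append(q)
        # Steer the proposer toward the kinds of queries that exposed failures,
        # so later rounds concentrate on the target model's weak spots.
        proposer.update(positive=failed_queries)
    return samples
```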
1 code implementation • 20 Aug 2024 • Xueliang Zhao, Lin Zheng, Haige Bo, Changran Hu, Urmish Thakker, Lingpeng Kong
This paper introduces SubgoalXL, a novel approach that synergizes subgoal-based proofs with expert learning to enhance LLMs' capabilities in formal theorem proving within the Isabelle environment.
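A compressed sketch of the subgoal-plus-expert-learning loop, under heavy assumptions: `decompose` and `prove_subgoal` stand in for LLM calls, `isabelle_check` for proof verification, and `finetune` for the expert-learning update. None of these are SubgoalXL's real API; this only illustrates the shape of expert iteration over subgoal-structured proofs.

```python
# Hypothetical sketch of subgoal-based expert iteration for formal proving.
# decompose, prove_subgoal, isabelle_check, and finetune are illustrative
# stand-ins, not SubgoalXL's actual interface.

def expert_iteration(theorems, model, decompose, prove_subgoal,
                     isabelle_check, finetune, n_iters=4):
    demonstrations = []
    for _ in range(n_iters):
        for thm in theorems:
            subgoals = decompose(model, thm)            # plan the proof informally
            steps = [prove_subgoal(model, sg) for sg in subgoals]
            candidate = "\n".join(steps)
            if isabelle_check(thm, candidate):          # keep verified proofs only
                demonstrations.append((thm, candidate))
        model = finetune(model, demonstrations)         # expert-learning update
    return model, demonstrations
```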
1 code implementation • 29 Feb 2024 • Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, Wei Bi
Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks.
Ranked #1 on Math Word Problem Solving on GSM-Plus
no code implementations • 21 Feb 2024 • Xueliang Zhao, Xinting Huang, Tingchen Fu, Qintong Li, Shansan Gong, Lemao Liu, Wei Bi, Lingpeng Kong
Multimodal reasoning stands as a pivotal capability for large vision-language models (LVLMs).
no code implementations • 19 Oct 2023 • Xueliang Zhao, Xinting Huang, Wei Bi, Lingpeng Kong
Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving.
1 code implementation • 30 May 2023 • Yuxuan Wang, Zilong Zheng, Xueliang Zhao, Jinpeng Li, Yueqian Wang, Dongyan Zhao
Video-grounded dialogue understanding is a challenging problem that requires a machine to perceive, parse, and reason over situated semantics extracted from weakly aligned video and dialogue.
1 code implementation • 25 May 2023 • Xueliang Zhao, Wenda Li, Lingpeng Kong
Large language models (LLMs) present an intriguing avenue of exploration in the domain of formal theorem proving.
Ranked #3 on Automated Theorem Proving on miniF2F-test (Pass@100 metric)
no code implementations • 22 Oct 2022 • Xueliang Zhao, Yuxuan Wang, Chongyang Tao, Chenshuo Wang, Dongyan Zhao
We study video-grounded dialogue generation, where a response is generated based on the dialogue context and the associated video.
no code implementations • 22 Oct 2022 • Xueliang Zhao, Lemao Liu, Tingchen Fu, Shuming Shi, Dongyan Zhao, Rui Yan
With the availability of massive general-domain dialogue data, pre-trained dialogue generation is an appealing way to transfer knowledge from the general domain to downstream applications.
no code implementations • 22 Oct 2022 • Xueliang Zhao, Tingchen Fu, Chongyang Tao, Rui Yan
Knowledge-grounded conversation (KGC) shows excellent potential for delivering engaging and informative responses.
no code implementations • NAACL 2022 • Xueliang Zhao, Tingchen Fu, Chongyang Tao, Wei Wu, Dongyan Zhao, Rui Yan
Grounding dialogue generation in external knowledge has shown great potential for building a system capable of replying with knowledgeable and engaging responses.
1 code implementation • 6 Apr 2022 • Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan
In this work, we introduce personal memory into knowledge selection in KGC to address the personalization issue.
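To make the idea concrete, here is a minimal, runnable PyTorch sketch of memory-aware knowledge selection: attend over personal-memory vectors using the dialogue context as the query, fuse the memory summary with the context, and score each knowledge candidate against the fused query. The function name, additive fusion, and dot-product scoring are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of personal-memory-aware knowledge selection; the
# fusion and scoring choices are illustrative assumptions.

def select_knowledge(ctx_vec, memory_vecs, knowledge_vecs):
    """ctx_vec: (d,); memory_vecs: (m, d); knowledge_vecs: (k, d)."""
    attn = F.softmax(memory_vecs @ ctx_vec, dim=0)   # attend over personal memory
    memory_summary = attn @ memory_vecs              # (d,) memory summary
    query = ctx_vec + memory_summary                 # simple additive fusion
    scores = knowledge_vecs @ query                  # score each knowledge candidate
    return scores.argmax().item(), scores

ctx = torch.randn(128)
mem = torch.randn(5, 128)       # 5 personal-memory entries
kn = torch.randn(10, 128)       # 10 knowledge candidates
best, _ = select_knowledge(ctx, mem, kn)
```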
1 code implementation • EMNLP 2020 • Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan
We study knowledge-grounded dialogue generation with pre-trained language models.
no code implementations • 14 Sep 2020 • Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, Rui Yan
To address these issues, in this paper we propose learning a context-response matching model with auxiliary self-supervised tasks designed for dialogue data, built on pre-trained language models (a minimal sketch of this multi-task setup follows below).
Ranked #5 on Conversational Response Selection on E-commerce
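The multi-task recipe can be pictured as a shared encoder with one head per task, the auxiliary losses being added to the main matching loss. Below is a minimal, runnable PyTorch sketch; the tiny Transformer encoder and the single generic auxiliary head are assumptions for illustration, not the paper's exact tasks or architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of multi-task context-response matching: a shared
# encoder feeds a main matching head and an auxiliary self-supervised head.

class MatchingModel(nn.Module):
    def __init__(self, hidden=256, vocab=30522):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.match_head = nn.Linear(hidden, 2)  # main task: does the response match?
        self.aux_head = nn.Linear(hidden, 2)    # auxiliary task, e.g. order prediction

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))  # (B, T, H)
        cls = h[:, 0]                            # first token as sequence summary
        return self.match_head(cls), self.aux_head(cls)

model = MatchingModel()
ids = torch.randint(0, 30522, (8, 32))
match_logits, aux_logits = model(ids)
# Auxiliary loss is added to the main loss with an illustrative weight of 0.5.
loss = nn.functional.cross_entropy(match_logits, torch.randint(0, 2, (8,))) \
     + 0.5 * nn.functional.cross_entropy(aux_logits, torch.randint(0, 2, (8,)))
loss.backward()
```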
1 code implementation • NeurIPS 2020 • Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao
While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain.
no code implementations • ICLR 2020 • Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan
In this low-resource setting, where knowledge-grounded dialogues are scarce, we devise a disentangled response decoder in order to isolate the parameters that depend on knowledge-grounded dialogues from the rest of the generation model.
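The parameter-isolation idea can be sketched as two paths: a language-model path that can be trained on plain, ungrounded dialogues, and a separate knowledge-attention path that alone depends on knowledge-grounded data. The minimal, runnable PyTorch module below is an illustrative assumption, not the paper's actual decoder.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a disentangled decoder: the GRU language-model path
# is separate from the knowledge-attention path, so only the latter needs
# knowledge-grounded training data.

class DisentangledDecoder(nn.Module):
    def __init__(self, hidden=256, vocab=10000):
        super().__init__()
        self.lm = nn.GRU(hidden, hidden, batch_first=True)          # context-only path
        self.knowledge_attn = nn.MultiheadAttention(hidden, 4, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, dec_inputs, knowledge):
        h, _ = self.lm(dec_inputs)                                  # (B, T, H)
        k, _ = self.knowledge_attn(h, knowledge, knowledge)         # grounded path
        return self.out(h + k)

model = DisentangledDecoder()
# Freeze the language-model path; fine-tune only the knowledge-dependent
# parameters on the small knowledge-grounded corpus.
for p in model.lm.parameters():
    p.requires_grad = False
logits = model(torch.randn(2, 5, 256), torch.randn(2, 7, 256))
```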
no code implementations • 11 Jun 2019 • Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, Rui Yan
We present a document-grounded matching network (DGMN) for response selection that can power a knowledge-aware retrieval-based chatbot system.