Search Results for author: Xueliang Zhao

Found 18 papers, 7 papers with code

There Are a Thousand Hamlets in a Thousand People’s Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory

no code implementations ACL 2022 Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan

Knowledge-grounded conversation (KGC) shows great potential for building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient of it.

Chatbot

Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

no code implementations 22 Oct 2024 Qintong Li, Jiahui Gao, Sheng Wang, Renjie Pi, Xueliang Zhao, Chuan Wu, Xin Jiang, Zhenguo Li, Lingpeng Kong

In this paper, we present a novel approach, ReverseGen, designed to automatically generate effective training samples that expose the weaknesses of LLMs.

Math

SubgoalXL: Subgoal-based Expert Learning for Theorem Proving

1 code implementation 20 Aug 2024 Xueliang Zhao, Lin Zheng, Haige Bo, Changran Hu, Urmish Thakker, Lingpeng Kong

This paper introduces SubgoalXL, a novel approach that synergizes subgoal-based proofs with expert learning to enhance LLMs' capabilities in formal theorem proving within the Isabelle environment.

Automated Theorem Proving

SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving

no code implementations 19 Oct 2023 Xueliang Zhao, Xinting Huang, Wei Bi, Lingpeng Kong

Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving.

GSM8K Math +1

VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic Understanding with Scene and Topic Transitions

1 code implementation 30 May 2023 Yuxuan Wang, Zilong Zheng, Xueliang Zhao, Jinpeng Li, Yueqian Wang, Dongyan Zhao

Video-grounded dialogue understanding is a challenging problem that requires a machine to perceive, parse, and reason over situated semantics extracted from weakly aligned videos and dialogues.

Dialogue Generation Dialogue Understanding +2

Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving

1 code implementation 25 May 2023 Xueliang Zhao, Wenda Li, Lingpeng Kong

Large language models (LLMs) present an intriguing avenue of exploration in the domain of formal theorem proving.

Ranked #3 on Automated Theorem Proving on miniF2F-test (Pass@100 metric)

Automated Theorem Proving

Towards Efficient Dialogue Pre-training with Transferable and Interpretable Latent Structure

no code implementations22 Oct 2022 Xueliang Zhao, Lemao Liu, Tingchen Fu, Shuming Shi, Dongyan Zhao, Rui Yan

With the availability of massive general-domain dialogue data, pre-trained dialogue generation is appealing for transferring knowledge from the general domain to downstream applications.

Dialogue Generation

Learning to Express in Knowledge-Grounded Conversation

no code implementations NAACL 2022 Xueliang Zhao, Tingchen Fu, Chongyang Tao, Wei Wu, Dongyan Zhao, Rui Yan

Grounding dialogue generation in external knowledge has shown great potential for building a system capable of replying with knowledgeable and engaging responses.

Dialogue Generation

Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues

no code implementations 14 Sep 2020 Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, Rui Yan

To address these issues, in this paper, we propose learning a context-response matching model with auxiliary self-supervised tasks designed for the dialogue data based on pre-trained language models.

Conversational Response Selection Retrieval

Zero-Resource Knowledge-Grounded Dialogue Generation

1 code implementation NeurIPS 2020 Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao

While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain.

Dialogue Generation

Low-Resource Knowledge-Grounded Dialogue Generation

no code implementations ICLR 2020 Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan

In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.

Decoder Dialogue Generation +1

A Document-grounded Matching Network for Response Selection in Retrieval-based Chatbots

no code implementations 11 Jun 2019 Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, Rui Yan

We present a document-grounded matching network (DGMN) for response selection that can power a knowledge-aware retrieval-based chatbot system.

Chatbot Retrieval
