Search Results for author: Yizhe Yang

Found 9 papers, 2 papers with code

EvoWiki: Evaluating LLMs on Evolving Knowledge

no code implementations · 18 Dec 2024 · Wei Tang, Yixin Cao, Yang Deng, Jiahao Ying, Bo Wang, Yizhe Yang, Yuyue Zhao, Qi Zhang, Xuanjing Huang, Yugang Jiang, Yong Liao

Knowledge utilization is a critical aspect of LLMs, and understanding how they adapt to evolving knowledge is essential for their effective deployment.

RAG

PSPO*: An Effective Process-supervised Policy Optimization for Reasoning Alignment

1 code implementation · 18 Nov 2024 · Jiawei Li, Xinyue Liang, Yizhe Yang, Chong Feng, Yang Gao

Process supervision enhances the performance of large language models in reasoning tasks by providing feedback at each step of chain-of-thought reasoning.

Mathematical Reasoning
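
The PSPO* entry above refers to step-level (process) feedback on chain-of-thought reasoning. The following is a minimal, hedged sketch of that general idea only, not the PSPO* algorithm itself; the step reward model and the mean aggregation are illustrative assumptions.

```python
# Generic illustration of process supervision: score each chain-of-thought
# step with a (hypothetical) step-level reward model, then aggregate the
# step rewards into a single training signal.
from typing import Callable, List


def aggregate_process_rewards(
    steps: List[str],
    step_reward_fn: Callable[[str], float],  # hypothetical process reward model
) -> float:
    """Score every reasoning step and return the mean step reward.

    Mean aggregation is just one simple choice; how step rewards are
    aggregated (e.g. accounting for reasoning-chain length) is exactly
    the kind of design question process-supervised methods study.
    """
    if not steps:
        return 0.0
    step_rewards = [step_reward_fn(step) for step in steps]
    return sum(step_rewards) / len(step_rewards)


if __name__ == "__main__":
    # Toy usage with a stand-in reward model that favors steps containing an equation.
    chain = [
        "Let x be the number of apples.",
        "Then 3x + 2 = 11, so 3x = 9.",
        "Therefore x = 3.",
    ]
    toy_reward = lambda s: 1.0 if "=" in s else 0.5
    print(aggregate_process_rewards(chain, toy_reward))  # 0.833...
```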

Speaker Verification in Agent-Generated Conversations

no code implementations · 16 May 2024 · Yizhe Yang, Palakorn Achananuparp, Heyan Huang, Jing Jiang, Ee-Peng Lim

The recent success of large language models (LLMs) has attracted widespread interest in developing role-playing conversational agents personalized to the characteristics and styles of different speakers, enhancing their ability to perform both general and special-purpose dialogue tasks.

Speaker Verification

Automating Dataset Updates Towards Reliable and Timely Evaluation of Large Language Models

no code implementations · 19 Feb 2024 · Jiahao Ying, Yixin Cao, Yushi Bai, Qianru Sun, Bo Wang, Wei Tang, Zhaojun Ding, Yizhe Yang, Xuanjing Huang, Shuicheng Yan

There are two updating strategies: 1) a mimicking strategy that generates similar samples based on the original data, preserving their stylistic and contextual essence, and 2) an extending strategy that further expands existing samples at varying cognitive levels by adapting Bloom's taxonomy of educational objectives.

MMLU
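
The entry above describes two dataset-updating strategies (mimicking and extending). Below is a hypothetical sketch of how those two strategies could be expressed as prompt templates; the prompt wording and the `generate` callable stand in for an arbitrary LLM client and are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the two updating strategies: "mimic" rewrites an
# existing sample in the same style, while "extend" raises it to a different
# level of Bloom's taxonomy.

MIMIC_PROMPT = (
    "Write a new question that mirrors the style, topic, and difficulty of the "
    "following sample, but with different surface content:\n{sample}"
)
EXTEND_PROMPT = (
    "Rewrite the following question at the '{level}' level of Bloom's taxonomy "
    "(e.g. moving from recall to analysis or evaluation):\n{sample}"
)


def update_sample(sample: str, generate, strategy: str = "mimic", level: str = "analyze") -> str:
    """Produce an updated evaluation sample using one of the two strategies.

    `generate` is any Callable[[str], str] that sends a prompt to an LLM.
    """
    if strategy == "mimic":
        prompt = MIMIC_PROMPT.format(sample=sample)
    elif strategy == "extend":
        prompt = EXTEND_PROMPT.format(sample=sample, level=level)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return generate(prompt)
```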

Graph vs. Sequence: An Empirical Study on Knowledge Forms for Knowledge-Grounded Dialogue

no code implementations · 13 Dec 2023 · Yizhe Yang, Heyan Huang, Yihang Liu, Yang Gao

Knowledge-grounded dialogue is the task of generating an informative response based on both the dialogue history and an external knowledge source.

Knowledge Graphs · Model Selection

MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications

no code implementations · 24 Oct 2023 · Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.

Language Modeling · Language Modelling +1

Building Knowledge-Grounded Dialogue Systems with Graph-Based Semantic Modeling

no code implementations · 27 Apr 2022 · Yizhe Yang, Heyan Huang, Yang Gao, Jiawei Li

However, it is challenging for current sequence-based models to acquire knowledge from complex documents and integrate it to produce correct responses without the aid of an explicit semantic structure.

Dialogue Generation · Response Generation

Ask to Understand: Question Generation for Multi-hop Question Answering

no code implementations · 17 Mar 2022 · Jiawei Li, Mucheng Ren, Yang Gao, Yizhe Yang

Specifically, we carefully design an end-to-end QG module on the basis of a classical QA module, which helps the model understand the context by asking inherently logical sub-questions, thereby inheriting interpretability from the QD-based method while showing superior performance.

Diversity · Multi-hop Question Answering +3
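
The excerpt above mentions answering multi-hop questions through inherently logical sub-questions. The sketch below illustrates that general sub-question idea only, under stated assumptions: `decompose` and `answer` are hypothetical stand-ins for a question-generation module and a QA module, not the paper's end-to-end architecture.

```python
# Generic sketch of answering a multi-hop question via sub-questions.
from typing import Callable, List, Tuple


def answer_multihop(
    question: str,
    context: str,
    decompose: Callable[[str, str], List[str]],  # hypothetical QG / decomposition module
    answer: Callable[[str, str], str],           # hypothetical QA module
) -> Tuple[str, List[Tuple[str, str]]]:
    """Ask sub-questions first, then answer the original question.

    Returns the final answer together with the (sub-question, sub-answer)
    trace, which is what makes this style of approach interpretable.
    """
    trace = []
    enriched_context = context
    for sub_q in decompose(question, context):
        sub_a = answer(sub_q, enriched_context)
        trace.append((sub_q, sub_a))
        # Feed intermediate answers back so later hops can build on them.
        enriched_context += f"\n{sub_q} {sub_a}"
    return answer(question, enriched_context), trace
```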
