Search Results for author: Yizhe Yang

Found 6 papers, 0 papers with code

Have Seen Me Before? Automating Dataset Updates Towards Reliable and Timely Evaluation

no code implementations 19 Feb 2024 Jiahao Ying, Yixin Cao, Bo Wang, Wei Tang, Yizhe Yang, Shuicheng Yan

The basic idea is to generate unseen and high-quality testing samples based on existing ones to mitigate leakage issues.
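As a hypothetical illustration of that idea (not the pipeline described in the paper), the sketch below derives an "unseen" variant of an existing multiple-choice test sample by shuffling its options and remapping the answer index, so a model that memorized the original answer position gains no advantage:

```python
# Minimal sketch (not the paper's method): create an unseen variant of an
# existing multiple-choice sample by permuting its options and remapping
# the gold answer index accordingly.
import random

def shuffle_options(sample: dict, seed: int = 0) -> dict:
    """Return a new sample with permuted options and a remapped answer index."""
    rng = random.Random(seed)
    options = list(sample["options"])
    order = list(range(len(options)))
    rng.shuffle(order)
    new_options = [options[i] for i in order]
    new_answer = order.index(sample["answer"])  # new position of the old correct option
    return {"question": sample["question"], "options": new_options, "answer": new_answer}

original = {
    "question": "Which planet is known as the Red Planet?",
    "options": ["Venus", "Mars", "Jupiter", "Saturn"],
    "answer": 1,
}
print(shuffle_options(original, seed=42))
```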

Graph vs. Sequence: An Empirical Study on Knowledge Forms for Knowledge-Grounded Dialogue

no code implementations 13 Dec 2023 Yizhe Yang, Heyan Huang, Yihang Liu, Yang Gao

Knowledge-grounded dialogue is the task of generating an informative response based on both the dialogue history and an external knowledge source.
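A minimal sketch of the task interface (not any specific model compared in the paper): a responder consumes the dialogue history plus a grounding knowledge snippet and produces one reply; `generate` here is a hypothetical stand-in for any text generator.

```python
# Knowledge-grounded dialogue interface sketch: flatten history + knowledge
# into a single prompt and delegate to a pluggable generator.
from typing import Callable, List

def build_prompt(history: List[str], knowledge: str) -> str:
    """Flatten dialogue turns and the grounding knowledge into one prompt string."""
    turns = "\n".join(f"Speaker {i % 2 + 1}: {u}" for i, u in enumerate(history))
    return f"Knowledge: {knowledge}\n{turns}\nSpeaker {len(history) % 2 + 1}:"

def respond(history: List[str], knowledge: str, generate: Callable[[str], str]) -> str:
    return generate(build_prompt(history, knowledge))

# Example with a trivial stand-in "generator" in place of a real model.
reply = respond(
    ["Who directed Inception?", "Christopher Nolan. Have you seen it?"],
    "Inception (2010) is a science-fiction film directed by Christopher Nolan.",
    generate=lambda prompt: "Yes, it came out in 2010.",
)
print(reply)
```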

Tasks: Knowledge Graphs, Model Selection

TSST: A Benchmark and Evaluation Models for Text Speech-Style Transfer

no code implementations 14 Nov 2023 Huashan Sun, Yixiao Wu, Yinghao Li, Jiawei Li, Yizhe Yang, Yang Gao

In summary, we present the TSST task, a new benchmark for style transfer that emphasizes human-oriented evaluation, and use it to explore and advance the performance of current LLMs.

Tasks: Style Transfer, Text Style Transfer

MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications

no code implementations 24 Oct 2023 Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.

Tasks: Language Modelling, Large Language Model

$G^2$: Enhance Knowledge Grounded Dialogue via Ground Graph

no code implementations 27 Apr 2022 Yizhe Yang, Yang Gao, Jiawei Li, Heyan Huang

In addition, a Ground Graph Aware Transformer ($G^2AT$) is proposed to enhance knowledge-grounded response generation.
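Purely as an illustration of the data-structure side (this is not the $G^2AT$ architecture itself), grounding knowledge can be organized as a graph of (head, relation, tail) triples before being handed to a graph-aware encoder:

```python
# Illustrative only: organize grounding knowledge as an adjacency list of
# (relation, tail) edges keyed by head entity.
from collections import defaultdict

triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "release_year", "2010"),
    ("Christopher Nolan", "born_in", "London"),
]

graph = defaultdict(list)
for head, relation, tail in triples:
    graph[head].append((relation, tail))

# Facts one hop from the dialogue topic can then be selected as grounding.
topic = "Inception"
for relation, tail in graph[topic]:
    print(f"{topic} --{relation}--> {tail}")
```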

Tasks: Response Generation

Ask to Understand: Question Generation for Multi-hop Question Answering

no code implementations 17 Mar 2022 Jiawei Li, Mucheng Ren, Yang Gao, Yizhe Yang

Specifically, we carefully design an end-to-end QG module on the basis of a classical QA module, which could help the model understand the context by asking inherently logical sub-questions, thus inheriting interpretability from the QD-based method and showing superior performance.
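A hedged sketch of the decomposition idea only (not the paper's end-to-end model): a QG step proposes sub-questions, a QA step answers each one, and the final answer is produced conditioned on those intermediate answers. `ask_subquestions` and `answer` are hypothetical stand-ins for learned modules.

```python
# Multi-hop QA via sub-question decomposition: answer each sub-question in
# turn, append its answer to the context, then answer the original question.
from typing import Callable, List, Tuple

def multi_hop_answer(
    question: str,
    context: str,
    ask_subquestions: Callable[[str, str], List[str]],
    answer: Callable[[str, str], str],
) -> Tuple[str, List[Tuple[str, str]]]:
    hops = []
    enriched = context
    for sub_q in ask_subquestions(question, context):
        sub_a = answer(sub_q, enriched)
        hops.append((sub_q, sub_a))
        enriched += f"\n{sub_q} {sub_a}"  # expose earlier hops to later ones
    return answer(question, enriched), hops
```

The returned list of (sub-question, answer) pairs also serves as a human-readable trace, which is the interpretability benefit the abstract refers to.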

Tasks: Multi-hop Question Answering, Question Answering, +2
