no code implementations • 19 Feb 2024 • Jiahao Ying, Yixin Cao, Bo Wang, Wei Tang, Yizhe Yang, Shuicheng Yan
The basic idea is to generate unseen and high-quality testing samples based on existing ones to mitigate leakage issues.
no code implementations • 13 Dec 2023 • Yizhe Yang, Heyan Huang, Yihang Liu, Yang Gao
Knowledge-grounded dialogue is the task of generating an informative response based on both the dialogue history and an external knowledge source.
no code implementations • 14 Nov 2023 • Huashan Sun, Yixiao Wu, Yinghao Li, Jiawei Li, Yizhe Yang, Yang Gao
In summary, we present the TSST task, a new benchmark for style transfer that emphasizes human-oriented evaluation, and we explore and advance the performance of current LLMs on it.
no code implementations • 24 Oct 2023 • Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.
no code implementations • 27 Apr 2022 • Yizhe Yang, Yang Gao, Jiawei Li, Heyan Huang
Besides, a Ground Graph Aware Transformer ($G^2AT$) is proposed to enhance knowledge grounded response generation.
no code implementations • 17 Mar 2022 • Jiawei Li, Mucheng Ren, Yang Gao, Yizhe Yang
Specifically, we carefully design an end-to-end QG module on the basis of a classical QA module. By asking inherently logical sub-questions, it helps the model understand the context, thus inheriting the interpretability of QD-based methods while achieving superior performance.