no code implementations • 18 Dec 2024 • Wei Tang, Yixin Cao, Yang Deng, Jiahao Ying, Bo Wang, Yizhe Yang, Yuyue Zhao, Qi Zhang, Xuanjing Huang, Yugang Jiang, Yong Liao
Knowledge utilization is a critical aspect of LLMs, and understanding how they adapt to evolving knowledge is essential for their effective deployment.
1 code implementation • 18 Nov 2024 • Jiawei Li, Xinyue Liang, Yizhe Yang, Chong Feng, Yang Gao
Process supervision enhances the performance of large language models in reasoning tasks by providing feedback at each step of chain-of-thought reasoning.
no code implementations • 16 May 2024 • Yizhe Yang, Palakorn Achananuparp, Heyan Huang, Jing Jiang, Ee-Peng Lim
The recent success of large language models (LLMs) has attracted widespread interest to develop role-playing conversational agents personalized to the characteristics and styles of different speakers to enhance their abilities to perform both general and special purpose dialogue tasks.
no code implementations • 19 Feb 2024 • Jiahao Ying, Yixin Cao, Yushi Bai, Qianru Sun, Bo Wang, Wei Tang, Zhaojun Ding, Yizhe Yang, Xuanjing Huang, Shuicheng Yan
There are two updating strategies: 1) a mimicking strategy that generates similar samples based on the original data, preserving their stylistic and contextual essence, and 2) an extending strategy that further expands existing samples across cognitive levels by adapting Bloom's taxonomy of educational objectives.
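The two strategies above could be framed as prompt templates handed to a generator model. The sketch below is purely illustrative, not the paper's implementation: the function names, prompt wording, and `BLOOM_LEVELS` list are all assumptions.

```python
# Hypothetical sketch of the two benchmark-updating strategies as
# prompt templates. None of these names come from the paper itself.

BLOOM_LEVELS = [
    "remember", "understand", "apply", "analyze", "evaluate", "create",
]


def mimicking_prompt(sample: str) -> str:
    """Mimicking strategy: ask for a new sample that keeps the
    original's style, difficulty, and context but changes the content."""
    return (
        "Rewrite the following benchmark item as a new item with the "
        "same style, difficulty, and context, but different content:\n"
        + sample
    )


def extending_prompt(sample: str, level: str) -> str:
    """Extending strategy: ask for an expansion of the sample to a
    target cognitive level from Bloom's taxonomy."""
    if level not in BLOOM_LEVELS:
        raise ValueError(f"unknown Bloom level: {level}")
    return (
        f"Extend the following benchmark item to the '{level}' level of "
        "Bloom's taxonomy, requiring deeper cognition than the original:\n"
        + sample
    )


item = "Q: In what year did the Berlin Wall fall? A: 1989"
print(mimicking_prompt(item))
print(extending_prompt(item, "analyze"))
```

In this framing, the mimicking strategy preserves the distribution of the original benchmark, while the extending strategy deliberately shifts items toward higher cognitive demand.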
no code implementations • 13 Dec 2023 • Yizhe Yang, Heyan Huang, Yihang Liu, Yang Gao
Knowledge-grounded dialogue is the task of generating an informative response based on both the dialogue history and an external knowledge source.
2 code implementations • 14 Nov 2023 • Huashan Sun, Yixiao Wu, Yuhao Ye, Yizhe Yang, Yinghao Li, Jiawei Li, Yang Gao
Modeling language style is necessary for AI systems to understand and generate diverse human language accurately.
no code implementations • 24 Oct 2023 • Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence.
no code implementations • 27 Apr 2022 • Yizhe Yang, Heyan Huang, Yang Gao, Jiawei Li
However, it is challenging for current sequence-based models to acquire knowledge from complex documents and integrate it to produce correct responses without the aid of an explicit semantic structure.
no code implementations • 17 Mar 2022 • Jiawei Li, Mucheng Ren, Yang Gao, Yizhe Yang
Specifically, we carefully design an end-to-end QG module on the basis of a classical QA module, which could help the model understand the context by asking inherently logical sub-questions, thus inheriting interpretability from the QD-based method and showing superior performance.