Search Results for author: Hongzhan Chen

Found 5 papers, 4 papers with code

RoleInteract: Evaluating the Social Interaction of Role-Playing Agents

1 code implementation • 20 Mar 2024 • Hongzhan Chen, Hehong Chen, Ming Yan, Wenshen Xu, Xing Gao, Weizhou Shen, Xiaojun Quan, Chenliang Li, Ji Zhang, Fei Huang, Jingren Zhou

In this paper, we introduce RoleInteract, the first benchmark designed to systematically evaluate the sociality of role-playing conversational agents at both individual and group levels of social interactions.

Small LLMs Are Weak Tool Learners: A Multi-LLM Agent

1 code implementation • 14 Jan 2024 • Weizhou Shen, Chenliang Li, Hongzhan Chen, Ming Yan, Xiaojun Quan, Hehong Chen, Ji Zhang, Fei Huang

Each component is implemented by a single LLM that focuses on a specific capability and collaborates with others to accomplish the task.

Language Modelling · Large Language Model

Knowledge Distillation for Closed-Source Language Models

no code implementations • 13 Jan 2024 • Hongzhan Chen, Xiaojun Quan, Hehong Chen, Ming Yan, Ji Zhang

The prior estimation derives a prior distribution from the corpus generated by the closed-source language models, while the posterior estimation employs a proxy model to update this prior and obtain a posterior distribution.

Knowledge Distillation

MCC-KD: Multi-CoT Consistent Knowledge Distillation

1 code implementation • 23 Oct 2023 • Hongzhan Chen, Siyue Wu, Xiaojun Quan, Rui Wang, Ming Yan, Ji Zhang

Large language models (LLMs) have showcased remarkable capabilities in complex reasoning through chain-of-thought (CoT) prompting.

Knowledge Distillation · Mathematical Reasoning

AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression

1 code implementation • 17 May 2023 • Siyue Wu, Hongzhan Chen, Xiaojun Quan, Qifan Wang, Rui Wang

To enhance the knowledge transfer of model reasoning and generalization, we further explore multi-view attribution distillation on all potential decisions of the teacher.

Knowledge Distillation · Language Modelling +2