Search Results for author: Zezhong Wang

Found 19 papers, 7 papers with code

ToolACE-R: Tool Learning with Adaptive Self-Refinement

no code implementations · 2 Apr 2025 · Xingshan Zeng, Weiwen Liu, Xu Huang, Zezhong Wang, Lingzhi Wang, Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Ruiming Tang, Qun Liu

Tool learning, which allows Large Language Models (LLMs) to leverage external tools for solving complex user tasks, has emerged as a promising avenue for extending model capabilities.

Computational Efficiency
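
As background for the tool-learning entries in this list (ToolACE-R here and ToolACE below), the following is a minimal sketch of the generic tool-calling loop such work builds on. It is not the paper's method; the tool names (get_weather, calculator) and the JSON call format are assumptions made purely for illustration.

```python
# Minimal sketch of a generic tool-calling loop (not ToolACE-R's method).
# The model is assumed to emit a JSON object such as
#   {"tool": "get_weather", "arguments": {"city": "Paris"}}
# whenever it decides an external tool is needed.
import json
from typing import Callable, Dict


def get_weather(city: str) -> str:
    """Hypothetical external tool."""
    return f"Sunny in {city}"


def calculator(expression: str) -> str:
    """Hypothetical external tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))


TOOLS: Dict[str, Callable[..., str]] = {
    "get_weather": get_weather,
    "calculator": calculator,
}


def run_tool_call(model_output: str) -> str:
    """Parse a model-emitted tool call and execute the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])


if __name__ == "__main__":
    print(run_tool_call('{"tool": "get_weather", "arguments": {"city": "Paris"}}'))
    print(run_tool_call('{"tool": "calculator", "arguments": {"expression": "3 * 7"}}'))
```

In practice the tool results are fed back to the model so it can refine its answer; the papers above study how to train and refine that behaviour.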

FReM: A Flexible Reasoning Mechanism for Balancing Quick and Slow Thinking in Long-Context Question Answering

no code implementations · 29 Mar 2025 · Zhengyi Zhao, Shubo Zhang, Zezhong Wang, Bin Liang, Binyang Li, Kam-Fai Wong

Long-context question-answering (LCQA) systems have greatly benefited from the powerful reasoning capabilities of large language models (LLMs), whose reasoning can be categorized into slow and quick modes.

Question Answering

MlingConf: A Comprehensive Study of Multilingual Confidence Estimation on Large Language Models

1 code implementation · 16 Oct 2024 · Boyang Xue, Hongru Wang, Rui Wang, Sheng Wang, Zezhong Wang, Yiming Du, Bin Liang, Kam-Fai Wong

This paper addresses this gap by introducing a comprehensive investigation of Multilingual Confidence estimation (MlingConf) on LLMs, focusing on both language-agnostic (LA) and language-specific (LS) tasks to explore the performance and language dominance effects of multilingual confidence estimations on different tasks.
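
For readers unfamiliar with confidence estimation, a common baseline is the length-normalized probability of the generated answer, sketched below. This is illustrative only and not claimed to be the MlingConf procedure; token_logprobs is a hypothetical input of per-token log-probabilities returned by the model.

```python
# A common confidence baseline: average token log-probability of the generated
# answer, exponentiated into a 0-1 score. Illustrative only; not the MlingConf method.
import math
from typing import List


def sequence_confidence(token_logprobs: List[float]) -> float:
    """Length-normalized probability of the generated sequence."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)


# Example: the same answer generated in two languages; a lower score in one
# language would hint at the language-dominance effect studied in the paper.
print(sequence_confidence([-0.1, -0.3, -0.2]))   # ~0.82
print(sequence_confidence([-0.9, -1.2, -0.7]))   # ~0.39
```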

ToolACE: Winning the Points of LLM Function Calling

no code implementations · 2 Sep 2024 · Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yuxian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong Liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, Enhong Chen

Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability.

Chain-of-Probe: Examing the Necessity and Accuracy of CoT Step-by-Step

no code implementations · 23 Jun 2024 · Zezhong Wang, Xingshan Zeng, Weiwen Liu, YuFei Wang, Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong

To address these questions, we propose a method, namely Chain-of-Probe (CoP), to probe changes in the model's mind during reasoning.


Gen4DS: Workshop on Data Storytelling in an Era of Generative AI

no code implementations · 2 Apr 2024 · Xingyu Lan, Leni Yang, Zezhong Wang, Yun Wang, Danqing Shi, Sheelagh Carpendale

Storytelling is an ancient and precious human ability that has been rejuvenated in the digital age.

A Comprehensive Study of Multilingual Confidence Estimation on Large Language Models

1 code implementation · 21 Feb 2024 · Boyang Xue, Hongru Wang, Rui Wang, Sheng Wang, Zezhong Wang, Yiming Du, Bin Liang, Kam-Fai Wong

This paper addresses this gap by introducing a comprehensive investigation of Multilingual Confidence estimation (MlingConf) on LLMs, focusing on both language-agnostic (LA) and language-specific (LS) tasks to explore the performance and language dominance effects of multilingual confidence estimations on different tasks.

UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for Personalized Dialogue Systems

no code implementations · 24 Jan 2024 · Hongru Wang, WenYu Huang, Yang Deng, Rui Wang, Zezhong Wang, YuFei Wang, Fei Mi, Jeff Z. Pan, Kam-Fai Wong

To better plan and incorporate the use of multiple knowledge sources in generating personalized responses, we first decompose the task into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.

RAG · Response Generation +2
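
To make the three-stage decomposition named in the abstract concrete, below is an illustrative sketch of such a pipeline (source selection, retrieval, generation). All function bodies and the SOURCES dictionary are hypothetical placeholders, not the UniMS-RAG implementation.

```python
# Illustrative sketch of the three sub-tasks named in the abstract
# (Knowledge Source Selection -> Knowledge Retrieval -> Response Generation).
# Everything here is a placeholder, not the UniMS-RAG implementation.
from typing import Dict, List

SOURCES: Dict[str, List[str]] = {
    "persona": ["I love hiking.", "I have two cats."],
    "documents": ["Mount Tai is in Shandong province."],
}


def select_source(dialogue_context: str) -> str:
    """Sub-task 1: decide which knowledge source the next turn needs."""
    return "persona" if "you" in dialogue_context.lower() else "documents"


def retrieve(source: str, query: str, k: int = 1) -> List[str]:
    """Sub-task 2: retrieve evidence from the chosen source (naive keyword overlap)."""
    scored = sorted(
        SOURCES[source],
        key=lambda doc: len(set(query.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def generate_response(dialogue_context: str, evidence: List[str]) -> str:
    """Sub-task 3: generate a reply grounded in the retrieved evidence (stub)."""
    return f"(grounded on: {evidence[0]}) Sure, let me tell you more."


context = "Do you have any pets?"
evidence = retrieve(select_source(context), context)
print(generate_response(context, evidence))
```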

JoTR: A Joint Transformer and Reinforcement Learning Framework for Dialog Policy Learning

1 code implementation · 1 Sep 2023 · Wai-Chung Kwan, Huimin Wang, Hongru Wang, Zezhong Wang, Xian Wu, Yefeng Zheng, Kam-Fai Wong

In addition, JoTR employs reinforcement learning with a reward-shaping mechanism to efficiently finetune the word-level dialogue policy, which allows the model to learn from its interactions, improving its performance over time.

Action Generation · Diversity

Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering

1 code implementation · 19 May 2023 · Fangkai Yang, Pu Zhao, Zezhong Wang, Lu Wang, Jue Zhang, Mohit Garg, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang

Large Language Models (LLMs) have gained popularity and achieved remarkable results in open-domain tasks, but their performance in real industrial domain-specific scenarios is average due to a lack of specific domain knowledge.

Language Modeling · Language Modelling +3

Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs

2 code implementations · 19 May 2023 · Hongru Wang, Rui Wang, Fei Mi, Yang Deng, Zezhong Wang, Bin Liang, Ruifeng Xu, Kam-Fai Wong

Large Language Models (LLMs), such as ChatGPT, greatly empower dialogue systems with strong language understanding and generation capabilities.

Question Answering · Semantic Similarity +1

Testability-Aware Low Power Controller Design with Evolutionary Learning

1 code implementation · 26 Nov 2021 · Min Li, Zhengyuan Shi, Zezhong Wang, Weiwei Zhang, Yu Huang, Qiang Xu

The proposed GA-guided XORNets also allow reducing the number of control bits, and the total testing time decreases by 20.78% on average and up to 47.09% compared to the existing design, without sacrificing test coverage.

Integrating Pretrained Language Model for Dialogue Policy Learning

no code implementations · 2 Nov 2021 · Hongru Wang, Huimin Wang, Zezhong Wang, Kam-Fai Wong

Reinforcement Learning (RL) has shown its potential for training a dialogue policy agent to maximize the accumulated rewards given by users.

Language Modeling · Language Modelling +2
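
As a small worked example of the "accumulated rewards" such a dialogue policy maximizes, the snippet below computes a discounted return over per-turn rewards. The reward values and discount factor are made-up numbers for illustration, not taken from the paper.

```python
# Minimal illustration of the accumulated (discounted) reward a dialogue policy
# is trained to maximize; all numbers below are made up for illustration.
def discounted_return(rewards, gamma=0.95):
    """Sum of per-turn rewards, discounted by how late in the dialogue they arrive."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))


# e.g. small per-turn penalties plus a large terminal reward for task success
print(discounted_return([-1, -1, -1, 20]))  # ~14.3
```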
