Search Results for author: Hangfan Zhang

Found 3 papers, 1 paper with code

Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute

1 code implementation • 1 Apr 2025 • Jianhao Chen, Zishuo Xun, Bocheng Zhou, Han Qi, Hangfan Zhang, Qiaosheng Zhang, Yang Chen, Wei Hu, Yuzhong Qu, Wanli Ouyang, Shuyue Hu

This paper presents a simple, effective, and cost-efficient strategy to improve LLM performance by scaling test-time compute.
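As a rough illustration of the repeated-sampling idea behind this line of work, the sketch below draws several samples from each model in a pool and aggregates them with a simple majority vote. The model callables and the voting rule are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of multi-LLM repeated sampling with majority voting.
# The model callables below are toy placeholders, not the paper's models
# or APIs; majority voting over final answers is one common aggregation
# choice and may differ from the method described in the paper.
from collections import Counter
from typing import Callable, List

def repeated_sampling(models: List[Callable[[str], str]],
                      prompt: str,
                      samples_per_model: int = 4) -> str:
    """Draw several samples from each model and return the most common answer."""
    answers = []
    for generate in models:
        for _ in range(samples_per_model):
            answers.append(generate(prompt))
    # Majority vote over all sampled answers.
    return Counter(answers).most_common(1)[0][0]

# Toy stand-ins for real LLM calls (purely illustrative).
model_a = lambda prompt: "42"
model_b = lambda prompt: "42"
model_c = lambda prompt: "41"

print(repeated_sampling([model_a, model_b, model_c], "What is 6 * 7?"))  # -> "42"
```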

If Multi-Agent Debate is the Answer, What is the Question?

no code implementations • 12 Feb 2025 • Hangfan Zhang, Zhiyao Cui, Xinrun Wang, Qiaosheng Zhang, Zhen Wang, Dinghao Wu, Shuyue Hu

Multi-agent debate (MAD) has emerged as a promising approach to enhance the factual accuracy and reasoning quality of large language models (LLMs) by engaging multiple agents in iterative discussions during inference.
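The iterative-discussion loop at the core of MAD can be sketched roughly as follows: each agent first answers independently, then repeatedly revises its answer after reading the other agents' latest responses. The agent callables, the fixed number of rounds, and the synchronous-update scheme here are assumptions for illustration, not the protocol of this or any specific paper.

```python
# Minimal sketch of a multi-agent debate (MAD) loop: agents answer, then
# revise after seeing peers' responses for a fixed number of rounds.
# Agent callables are hypothetical placeholders for real LLM calls.
from typing import Callable, List

Agent = Callable[[str, List[str]], str]  # (prompt, peer_answers) -> answer

def debate(agents: List[Agent], prompt: str, rounds: int = 2) -> List[str]:
    # Round 0: independent answers with no peer context.
    answers = [agent(prompt, []) for agent in agents]
    for _ in range(rounds):
        # Each agent revises given the other agents' latest answers
        # (synchronous update: all agents see the previous round's answers).
        answers = [
            agent(prompt, answers[:i] + answers[i + 1:])
            for i, agent in enumerate(agents)
        ]
    return answers

# Toy agents for illustration: one is confident, one defers to its peer.
agent_1: Agent = lambda prompt, peers: "Paris"
agent_2: Agent = lambda prompt, peers: peers[0] if peers else "Lyon"

print(debate([agent_1, agent_2], "What is the capital of France?"))  # -> ['Paris', 'Paris']
```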

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

no code implementations • 2 Oct 2023 • Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu

A natural question is: "could alignment really prevent those open-sourced large language models from being misused to generate undesired content?"

Task: Text Generation
