Search Results for author: Bocheng Zhou

Found 2 papers, 2 papers with code

Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute

1 code implementation · 1 Apr 2025 · Jianhao Chen, Zishuo Xun, Bocheng Zhou, Han Qi, Hangfan Zhang, Qiaosheng Zhang, Yang Chen, Wei Hu, Yuzhong Qu, Wanli Ouyang, Shuyue Hu

This paper presents a simple, effective, and cost-efficient strategy to improve LLM performance by scaling test-time compute.
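The title's multi-LLM repeated sampling can be sketched as: draw several samples from each of several models and aggregate by majority vote. This is a minimal illustration, not the paper's implementation; the `repeated_sampling_vote` helper and the toy model callables are hypothetical stand-ins for real LLM API calls.

```python
import random
from collections import Counter

def repeated_sampling_vote(models, prompt, samples_per_model=4, rng=None):
    """Sample each model several times on the same prompt and return
    the majority answer (hypothetical helper, not the paper's code)."""
    rng = rng or random.Random()
    answers = [m(prompt, rng) for m in models for _ in range(samples_per_model)]
    return Counter(answers).most_common(1)[0][0]

# Toy "models": each returns the right answer with a different probability.
def make_model(p_correct):
    return lambda prompt, rng: "42" if rng.random() < p_correct else str(rng.randint(0, 9))

models = [make_model(0.6), make_model(0.5), make_model(0.4)]
print(repeated_sampling_vote(models, "What is 6*7?", samples_per_model=8,
                             rng=random.Random(0)))
```

Sampling from multiple weaker models and voting can match fewer samples from a single stronger model, which is what makes the strategy cost-efficient.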

Graph Attention is Not Always Beneficial: A Theoretical Analysis of Graph Attention Mechanisms via Contextual Stochastic Block Models

1 code implementation · 20 Dec 2024 · Zhongtian Ma, Qiaosheng Zhang, Bocheng Zhou, Yexin Zhang, Shuyue Hu, Zhen Wang

Specifically, by appropriately defining *structure noise* and *feature noise* in graphs, we show that graph attention mechanisms can enhance classification performance when structure noise exceeds feature noise.
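The contextual stochastic block model (CSBM) the abstract refers to can be sketched as follows: community structure generates edges, Gaussian means generate features. A minimal two-class generator, where structure noise grows as `p_in - p_out` shrinks and feature noise grows with `sigma / mu` (the function name and parameters here are illustrative, not the paper's notation):

```python
import numpy as np

def sample_csbm(n, p_in, p_out, mu, sigma, rng):
    """Sample a two-class contextual SBM: labels, adjacency, node features.

    Edges appear with prob p_in within a class and p_out across classes;
    features are Gaussian with class-dependent mean +/- mu and std sigma.
    Illustrative sketch, not the paper's exact model or code.
    """
    y = rng.integers(0, 2, size=n) * 2 - 1           # labels in {-1, +1}
    same_class = np.equal.outer(y, y)
    edge_probs = np.where(same_class, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < edge_probs, 1)
    A = (upper | upper.T).astype(int)                # symmetric, no self-loops
    X = y[:, None] * mu + rng.normal(0.0, sigma, (n, 1))
    return y, A, X

rng = np.random.default_rng(0)
y, A, X = sample_csbm(200, p_in=0.3, p_out=0.05, mu=1.0, sigma=0.5, rng=rng)
```

In this setup, low `sigma` relative to `mu` means features alone separate the classes well (feature noise small), while a small gap between `p_in` and `p_out` means edges carry little class signal (structure noise large), giving concrete meaning to the regimes compared in the abstract.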

Tasks: Graph Attention, Node Classification
