Search Results for author: Bofei Gao

Found 7 papers, 6 papers with code

LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback

1 code implementation • 20 Jun 2024 • Bofei Gao, Zefan Cai, Runxin Xu, Peiyi Wang, Ce Zheng, Runji Lin, Keming Lu, Dayiheng Liu, Chang Zhou, Wen Xiao, Junjie Hu, Tianyu Liu, Baobao Chang

To mitigate the aforementioned insufficiency of binary labels, we introduce step-wise natural language feedback as rationale labels (i.e., the correctness of the current step and the corresponding explanations).

Tasks: Binary Classification, GSM8K, +2
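
This step-wise rationale format is easy to picture as a small data structure. Below is a minimal, hedged sketch of how such labels might be represented; the class and field names are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StepRationale:
    """One solution step together with its rationale label (illustrative schema)."""
    step_text: str      # the model-generated solution step
    is_correct: bool    # binary correctness of this step
    explanation: str    # natural-language explanation of why the step is (in)correct

@dataclass
class SolutionCritique:
    """Step-wise critique of a full solution, in contrast to a single binary label."""
    question: str
    steps: List[StepRationale]

    def first_error(self) -> int:
        """Index of the first incorrect step, or -1 if every step is correct."""
        for i, step in enumerate(self.steps):
            if not step.is_correct:
                return i
        return -1

# Toy GSM8K-style example
critique = SolutionCritique(
    question="Tom has 3 apples and buys 2 more. How many apples does he have?",
    steps=[
        StepRationale("Tom starts with 3 apples.", True, "Restates the given quantity correctly."),
        StepRationale("3 + 2 = 6.", False, "Arithmetic error: 3 + 2 equals 5, not 6."),
    ],
)
print(critique.first_error())  # -> 1
```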

PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling

1 code implementation • 4 Jun 2024 • Zefan Cai, Yichi Zhang, Bofei Gao, Yuliang Liu, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, Baobao Chang, Junjie Hu, Wen Xiao

Experimental evaluations on the LongBench benchmark show that PyramidKV matches the performance of models with a full KV cache while retaining only 12% of the KV cache, significantly reducing memory usage.
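
The "pyramidal information funneling" in the title suggests spending the KV-cache budget unevenly across layers. The sketch below illustrates that general idea under two explicit assumptions, a linearly decaying per-layer budget and top-k token selection by accumulated attention mass; it is not the paper's exact algorithm.

```python
import numpy as np

def pyramidal_budgets(total_budget: int, num_layers: int) -> list:
    """Split a total KV-cache budget across layers so lower layers keep more
    entries and higher layers keep fewer (a 'pyramid'-shaped allocation)."""
    weights = np.arange(num_layers, 0, -1, dtype=float)   # layer 0 gets the largest share
    weights /= weights.sum()
    budgets = np.floor(weights * total_budget).astype(int)
    budgets[0] += total_budget - budgets.sum()             # hand the rounding remainder to layer 0
    return budgets.tolist()

def compress_layer_cache(attn_scores: np.ndarray, budget: int) -> np.ndarray:
    """Keep the indices of the `budget` cached tokens that received the most
    attention mass (summed over heads and query positions)."""
    token_importance = attn_scores.sum(axis=(0, 1))        # shape: (num_cached_tokens,)
    keep = np.argsort(token_importance)[-budget:]
    return np.sort(keep)

# Example: 32 layers, retaining roughly 12% of a 4096-token-per-layer cache overall
budgets = pyramidal_budgets(total_budget=int(0.12 * 4096 * 32), num_layers=32)
print(budgets[0], budgets[-1])   # lower layers keep far more tokens than upper ones

attn = np.random.rand(8, 16, 4096)                         # (heads, queries, cached_tokens)
kept = compress_layer_cache(attn, budgets[0])
print(kept.shape)
```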

Coarse-to-Fine Dual Encoders are Better Frame Identification Learners

1 code implementation • 20 Oct 2023 • Kaikai An, Ce Zheng, Bofei Gao, Haozhe Zhao, Baobao Chang

Recent work measures the similarity or matching score between targets and candidate frames by modeling frame definitions.

Tasks: Contrastive Learning, Representation Learning, +1
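
The matching setup described above, scoring a target against candidate frames via their definitions, can be sketched with any pair of text encoders and a similarity function. The bag-of-words encoder and the toy frame inventory below are stand-ins purely for illustration, not the paper's dual-encoder model.

```python
from collections import Counter
import math

def encode(text: str) -> Counter:
    """Toy bag-of-words 'encoder'; a real system would use a neural text encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def identify_frame(target_context: str, frame_definitions: dict) -> str:
    """Return the frame whose definition is most similar to the target's context."""
    target_vec = encode(target_context)
    return max(frame_definitions, key=lambda f: cosine(target_vec, encode(frame_definitions[f])))

# Hypothetical frame inventory with paraphrased definitions
frames = {
    "Commerce_buy": "a buyer acquires goods from a seller in exchange for money",
    "Getting": "a recipient comes into possession of a theme",
}
print(identify_frame("she bought a used car from the dealer for money", frames))  # -> Commerce_buy
```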

Guiding AMR Parsing with Reverse Graph Linearization

1 code implementation • 13 Oct 2023 • Bofei Gao, Liang Chen, Peiyi Wang, Zhifang Sui, Baobao Chang

Abstract Meaning Representation (AMR) parsing aims to extract an abstract semantic graph from a given sentence.

Tasks: Abstract Meaning Representation, AMR Parsing, +1
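
As a concrete picture of what "linearizing" such a semantic graph means, and what reversing that order could look like, here is a small hedged sketch; the toy graph, the PENMAN-style token rendering, and the simple token reversal are assumptions for illustration, not the paper's actual reverse-graph-linearization procedure.

```python
# Toy AMR-like graph for "The boy wants to go": want-01 has :ARG0 boy and :ARG1 go-02,
# and go-02's :ARG0 re-enters the boy variable.
graph = {
    "w / want-01": [(":ARG0", "b / boy"), (":ARG1", "g / go-02")],
    "g / go-02": [(":ARG0", "b")],   # bare variable = re-entrant reference
    "b / boy": [],
}

def linearize(graph: dict, root: str) -> list:
    """Depth-first linearization of the graph into a flat PENMAN-like token sequence."""
    tokens = ["(", root]
    for relation, child in graph.get(root, []):
        tokens.append(relation)
        if child in graph:                    # expand full concepts
            tokens.extend(linearize(graph, child))
        else:                                 # keep re-entrant variables as-is
            tokens.append(child)
    tokens.append(")")
    return tokens

forward = linearize(graph, "w / want-01")
reverse = list(reversed(forward))             # the same content in reversed token order

print(" ".join(forward))
print(" ".join(reverse))
```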

A Two-Stage Method for Chinese AMR Parsing

1 code implementation • 29 Sep 2022 • Liang Chen, Bofei Gao, Baobao Chang

In this paper, we provide a detailed description of our system at the CAMRP-2022 evaluation.

Tasks: AMR Parsing
