Search Results for author: Qiyue Gao

Found 7 papers, 5 papers with code

LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models

1 code implementation • 8 Apr 2024 • Shibo Hao, Yi Gu, Haotian Luo, Tianyang Liu, Xiyan Shao, Xinyuan Wang, Shuhua Xie, Haodi Ma, Adithya Samavedhi, Qiyue Gao, Zhen Wang, Zhiting Hu

We develop LLM Reasoners, a library for standardized, modular implementation of existing and new reasoning algorithms under a unified formulation of the search, reward, and world model components.
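The decomposition described above can be illustrated with a minimal sketch. This is a hypothetical toy example of the search / reward / world-model pattern, not the actual LLM Reasoners API; all class and method names (`WorldModel`, `SearchConfig`, `greedy_search`) are illustrative assumptions:

```python
from typing import List

class WorldModel:
    """Toy world model: predicts the next state given a state and an action
    (a reasoning step). Here states are integers and actions add to them."""
    def init_state(self) -> int:
        return 0

    def step(self, state: int, action: int) -> int:
        return state + action

    def is_terminal(self, state: int) -> bool:
        return state >= 10

class SearchConfig:
    """Proposes candidate actions and scores them with a reward."""
    def get_actions(self, state: int) -> List[int]:
        return [1, 2, 3]

    def reward(self, state: int, action: int) -> float:
        # Toy reward for illustration: prefer larger steps.
        return float(action)

def greedy_search(world: WorldModel, config: SearchConfig) -> List[int]:
    """A minimal search component: at each step, pick the action with the
    highest reward under the world model, until a terminal state."""
    state = world.init_state()
    trace = [state]
    while not world.is_terminal(state):
        action = max(config.get_actions(state),
                     key=lambda a: config.reward(state, a))
        state = world.step(state, action)
        trace.append(state)
    return trace

print(greedy_search(WorldModel(), SearchConfig()))  # → [0, 3, 6, 9, 12]
```

In the actual library, the world model and reward would be backed by LLM calls, and the search component could be swapped (e.g., beam search or MCTS instead of greedy) without touching the other two, which is the point of the unified formulation.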

Extracting Mathematical Concepts with Large Language Models

no code implementations • 29 Aug 2023 • Valeria de Paiva, Qiyue Gao, Pavel Kovalev, Lawrence S. Moss

Where our study diverges from previous work is in (1) providing a more thorough analysis of what makes mathematical term extraction a difficult problem to begin with; (2) paying close attention to inter-annotator disagreements; (3) providing a set of guidelines which both human and machine annotators could use to standardize the extraction process; (4) introducing a new annotation tool to help humans with ATE, applicable to any mathematical field and even beyond mathematics; (5) using prompts to ChatGPT as part of the extraction process, and proposing best practices for such prompts; and (6) raising the question of whether ChatGPT could be used as an annotator on the same level as human experts.

Term Extraction

Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding

no code implementations • NAACL 2022 • Zeming Chen, Qiyue Gao

In the age of large transformer language models, linguistic evaluation plays an important role in diagnosing models' abilities and limitations in natural language understanding.

Language Modelling Natural Language Understanding

Probing Linguistic Information For Logical Inference In Pre-trained Language Models

1 code implementation • 3 Dec 2021 • Zeming Chen, Qiyue Gao

We propose a methodology for probing linguistic information for logical inference in pre-trained language model representations.

Language Modelling Natural Language Understanding

Monotonicity Marking from Universal Dependency Trees

1 code implementation • IWCS (ACL) 2021 • Zeming Chen, Qiyue Gao

Dependency parsing is a tool widely used in natural language processing and computational linguistics.

Dependency Parsing
