Search Results for author: Hanmeng Liu

Found 8 papers, 6 papers with code

Break the Chain: Large Language Models Can be Shortcut Reasoners

no code implementations • 4 Jun 2024 • Mengru Ding, Hanmeng Liu, Zhizhang Fu, Jian Song, WenBo Xie, Yue Zhang

We propose the integration of human-like heuristics and shortcuts into language models (LMs) through "break the chain" strategies.
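The abstract names the strategy without showing it; below is a minimal sketch of what a "break the chain" shortcut prompt could look like next to a standard Chain-of-Thought prompt. Both templates are assumptions for illustration, not the prompts released with the paper.

```python
# Hypothetical prompt templates contrasting step-by-step Chain-of-Thought
# elicitation with a "break the chain" shortcut instruction.
# Illustrative guesses only -- not the paper's actual templates.

QUESTION = (
    "If all bloops are razzies and all razzies are lazzies, "
    "are all bloops lazzies?"
)

cot_prompt = f"{QUESTION}\nLet's think step by step."

shortcut_prompt = (
    f"{QUESTION}\n"
    "Use the most direct shortcut to reach the answer, "
    "skipping intermediate reasoning steps."
)

print(cot_prompt)
print(shortcut_prompt)
```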

Logic Agent: Enhancing Validity with Logic Rule Invocation

no code implementations • 28 Apr 2024 • Hanmeng Liu, Zhiyang Teng, Chaoli Zhang, Yue Zhang

Chain-of-Thought (CoT) prompting has emerged as a pivotal technique for augmenting the inferential capabilities of language models during reasoning tasks.

Informativeness • Navigate
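The entry describes invoking logic rules during reasoning. As a toy sketch of rule invocation (one reading of the idea, not the paper's agent, which operates over natural language), the snippet below forward-chains modus ponens over a small fact base:

```python
# Toy rule invocation: forward-chain modus ponens over atomic facts.
# Predicate names and the rule encoding are hypothetical illustrations.

facts = {"rains"}                      # known atomic propositions
rules = [("rains", "ground_is_wet")]   # implications: antecedent -> consequent

def forward_chain(facts, rules):
    """Apply modus ponens repeatedly until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain(facts, rules))  # derives 'ground_is_wet'
```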

GLoRE: Evaluating Logical Reasoning of Large Language Models

1 code implementation • 13 Oct 2023 • Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, Yue Zhang

Recently, large language models (LLMs), including notable models such as GPT-4 and burgeoning community models, have showcased significant general language understanding abilities.

Logical Reasoning • Natural Language Understanding
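GLoRE is an evaluation benchmark, so a typical use is scoring a model's accuracy over multiple-choice logical reasoning items. The item schema and the model_answer() stub below are assumptions for illustration; consult the released code for GLoRE's actual format.

```python
# Generic accuracy loop for a multiple-choice logical reasoning benchmark.
# Item schema and model_answer() are illustrative stubs, not GLoRE's API.

items = [
    {"context": "All cats are animals. Tom is a cat.",
     "question": "Is Tom an animal?",
     "options": ["Yes", "No"],
     "answer": 0},
]

def model_answer(item):
    """Stub: a real evaluation would query an LLM here."""
    return 0

correct = sum(model_answer(it) == it["answer"] for it in items)
print(f"accuracy: {correct / len(items):.2%}")
```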

LogiCoT: Logical Chain-of-Thought Instruction-Tuning

1 code implementation • 20 May 2023 • Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang

LogiCoT serves as an instruction set for teaching models logical reasoning and for eliciting general reasoning skills.

Logical Reasoning • Text Generation
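As an instruction set, LogiCoT consists of records a model is tuned on. The instruction/input/output layout below is a common convention and only a guess at the shape of one record; the released data defines the actual schema.

```python
import json

# One hypothetical instruction-tuning record in the common
# instruction/input/output layout; LogiCoT's actual schema may differ.
record = {
    "instruction": "Identify the logical relation between the two statements.",
    "input": "P1: No birds are mammals. P2: Some mammals are birds.",
    "output": "Contradiction: P2 asserts an overlap that P1 rules out.",
}
print(json.dumps(record, indent=2))
```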

Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4

1 code implementation • 7 Apr 2023 • Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang

With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to learn how GPT-4 performs on various logical reasoning tasks.

Logical Reasoning • Natural Language Inference • +2

GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective

1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang

Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.

Natural Language Understanding • Out-of-Distribution Generalization
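The protocol behind GLUE-X can be summarized as scoring a model on an in-distribution test set and on out-of-distribution test sets, then reporting the gap. The evaluate() stub and the numbers below are placeholders, not results from the paper.

```python
# Sketch of an OOD evaluation protocol: compare in-distribution (ID)
# accuracy against out-of-distribution (OOD) accuracy and report the drop.
# evaluate() is a hypothetical stub returning placeholder numbers.

def evaluate(model, split):
    """Stub: would return the model's accuracy on the given test split."""
    return {"id_test": 0.91, "ood_test": 0.78}[split]

id_acc = evaluate("plm", "id_test")
ood_acc = evaluate("plm", "ood_test")
print(f"ID {id_acc:.2f} | OOD {ood_acc:.2f} | drop {id_acc - ood_acc:.2f}")
```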

Natural Language Inference in Context -- Investigating Contextual Reasoning over Long Texts

1 code implementation • 10 Nov 2020 • Hanmeng Liu, Leyang Cui, Jian Liu, Yue Zhang

Natural language inference (NLI) is a fundamental NLP task, investigating the entailment relationship between two texts.

Logical Reasoning • Natural Language Inference • +1
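The task format is worth seeing concretely: NLI pairs a premise with a hypothesis and assigns one of three labels. The examples below are invented and deliberately short, whereas the paper studies reasoning over long contexts.

```python
# The NLI format: premise, hypothesis, and a three-way label.
# Example texts are invented for illustration.

examples = [
    ("The meeting ran from 9 a.m. to noon.",
     "The meeting lasted three hours.", "entailment"),
    ("The meeting ran from 9 a.m. to noon.",
     "The meeting was held at night.", "contradiction"),
    ("The meeting ran from 9 a.m. to noon.",
     "Everyone attended the meeting.", "neutral"),
]
for premise, hypothesis, label in examples:
    print(f"{label:13} | P: {premise} | H: {hypothesis}")
```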

LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning

2 code implementations • 16 Jul 2020 • Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, Yue Zhang

Machine reading is a fundamental task for testing natural language understanding capability, and it is closely related to human cognition in many respects.

Logical Reasoning • Machine Reading Comprehension • +1
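A LogiQA-style item pairs a passage with a question, candidate options, and a gold answer. The item below is invented to show the shape; it is not drawn from the dataset.

```python
# Shape of a LogiQA-style multiple-choice reading comprehension item.
# Text is invented for illustration, not an actual dataset item.

item = {
    "passage": ("Every member of the club speaks French. "
                "Ann does not speak French."),
    "question": "Which conclusion follows?",
    "options": [
        "Ann is a member of the club.",
        "Ann is not a member of the club.",
        "Some members do not speak French.",
        "Ann speaks German.",
    ],
    "gold": 1,
}
print(item["options"][item["gold"]])
```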
