no code implementations • 22 May 2024 • Qiji Zhou, Ruochen Zhou, Zike Hu, Panzhong Lu, Siyang Gao, Yue Zhang
Recent advancements in Chain-of-Thought (CoT) and related rationale-based methods have significantly improved the performance of Large Language Models (LLMs) on complex reasoning tasks.
Ranked #5 on Visual Question Answering on MM-Vet
1 code implementation • 13 Oct 2023 • Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, Yue Zhang
Recently, large language models (LLMs), including notable models such as GPT-4 and burgeoning community models, have showcased significant general language understanding abilities.
1 code implementation • 20 May 2023 • Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang
LogiCoT serves as an instruction set for teaching models logical reasoning and eliciting general reasoning skills.
1 code implementation • 7 Apr 2023 • Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang
With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to learn how GPT-4 performs on various logical reasoning tasks.
no code implementations • ACL 2020 • Qiji Zhou, Yue Zhang, Donghong Ji, Hao Tang
Abstract Meaning Representations (AMRs) are structural representations that capture the sentence-level semantics of broad-coverage natural sentences.
no code implementations • ACL 2020 • Hao Tang, Donghong Ji, Chenliang Li, Qiji Zhou
The idea is to allow the dependency graph to guide the representation learning of the transformer encoder and vice versa.
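The paper's implementation is not shown here, but one common way to realize the graph-to-transformer direction of such mutual guidance is to add a learned bias to self-attention scores for token pairs linked by a dependency arc. The sketch below is purely illustrative (a single head, a single scalar edge bias, and the reverse transformer-to-graph direction omitted); all names are hypothetical and not the authors' API.

```python
# Minimal sketch: dependency-graph-biased self-attention (illustrative only).
import torch
import torch.nn as nn


class GraphBiasedSelfAttention(nn.Module):
    """Single-head self-attention whose scores are shifted where a dependency arc exists."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5
        # Learned additive bias applied to edges present in the dependency graph.
        self.edge_bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, seq_len, d_model) token representations
        # adj: (batch, seq_len, seq_len) 0/1 dependency adjacency matrix
        scores = (self.q(x) @ self.k(x).transpose(-2, -1)) * self.scale
        scores = scores + self.edge_bias * adj  # dependency arcs bias the attention
        attn = scores.softmax(dim=-1)
        return attn @ self.v(x)


# Usage: 2 sentences of 5 tokens each, 16-dim hidden states, random arcs.
x = torch.randn(2, 5, 16)
adj = (torch.rand(2, 5, 5) > 0.7).float()
print(GraphBiasedSelfAttention(16)(x, adj).shape)  # torch.Size([2, 5, 16])
```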