Search Results for author: Takeshi Kojima

Found 4 papers, 3 papers with code

Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text

1 code implementation · 30 Nov 2023 · Qi Cao, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa

While Large Language Models (LLMs) have achieved remarkable performance in many tasks, much about their inner workings remains unclear.

Large Language Models are Zero-Shot Reasoners

2 code implementations · 24 May 2022 · Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars.

Tasks: Arithmetic Reasoning, Date Understanding, +3
