Search Results for author: Doyoung Kim

Found 6 papers, 5 papers with code

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

1 code implementation • 23 May 2023 • Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

Large Language Models (LLMs) have shown enhanced capabilities for solving novel tasks by reasoning step-by-step, known as Chain-of-Thought (CoT) reasoning; how can we instill the same ability to reason step-by-step on unseen tasks into LMs with fewer than 100B parameters? (A minimal data-formatting sketch follows this entry.)

Few-Shot Learning
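As a rough illustration of how CoT fine-tuning data can be serialized, the hedged sketch below formats one (instruction, rationale, answer) triple into an input/target pair whose target produces the rationale before the final answer. The field names, prompt wording, and [ANSWER] delimiter are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical CoT fine-tuning data preparation: each training example
# asks the model to generate the rationale before the final answer, so
# step-by-step reasoning is distilled into a smaller LM.
# Field names ("instruction", "rationale", "answer") are illustrative.

def format_cot_example(instruction: str, rationale: str, answer: str) -> dict:
    """Serialize one CoT example into an (input, target) pair for seq2seq fine-tuning."""
    source = f"{instruction}\nLet's think step by step."
    target = f"{rationale} [ANSWER] {answer}"
    return {"input": source, "target": target}

example = format_cot_example(
    instruction="Premise: All birds can fly. Penguins are birds. Question: Can penguins fly?",
    rationale="Under the stated premise, every bird can fly and penguins are birds, so the premise implies yes.",
    answer="yes",
)
print(example["input"])
print(example["target"])
```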

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

1 code implementation • 7 Feb 2023 • Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Recently, instruction-tuning Language Models (LMs) on multiple tasks, also known as multitask-prompted fine-tuning (MT), has been shown to enable generalization to unseen tasks.
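The hedged sketch below contrasts the two recipes named in this entry: one multitask-prompted (MT) model trained on a mixture of tasks versus one expert LM per task. The `fine_tune` stub, task names, and base checkpoint are placeholders, not the paper's actual code.

```python
# Illustrative contrast (not the paper's implementation): multitask-prompted
# fine-tuning trains a single LM on a mixture of tasks, while the "expert"
# recipe trains one LM per task and selects an expert at inference time.
from typing import Dict, List

def fine_tune(base_model: str, examples: List[dict]) -> str:
    # Placeholder: pretend training returns a checkpoint identifier.
    return f"{base_model}-ft-{len(examples)}ex"

tasks: Dict[str, List[dict]] = {
    "nli": [{"input": "Premise ... Hypothesis ...", "target": "entailment"}],
    "summarization": [{"input": "Summarize: ...", "target": "..."}],
}

# Multitask-prompted fine-tuning (MT): one model, all tasks mixed together.
mt_model = fine_tune("t5-base", [ex for exs in tasks.values() for ex in exs])

# Expert LMs: a separate model per task.
experts = {name: fine_tune("t5-base", exs) for name, exs in tasks.items()}
print(mt_model, experts)
```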

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

Meta-training, which fine-tunes a language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved zero-shot task generalization performance (a toy scoring sketch follows this entry).

Language Modelling
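The toy sketch below contrasts the standard meta-training scoring direction described above with the "flipped" direction suggested by the title: scoring the instruction given the input and a candidate label, then choosing the candidate under which the instruction is most likely. The `loglik` scorer is a hypothetical stand-in for an LM log-probability call, not the paper's implementation.

```python
# Toy contrast between the standard direction, P(label | instruction, input),
# and the flipped direction, P(instruction | input, label).
# `loglik` is a hypothetical stand-in for an LM log-probability call.

def loglik(target: str, condition: str) -> float:
    # Placeholder: pretend this is log P(target | condition) from an LM.
    return -abs(len(target) - len(condition)) / 10.0

def direct_predict(instruction: str, input_text: str, candidates: list[str]) -> str:
    # Standard direction: score each candidate label given instruction + input.
    return max(candidates, key=lambda c: loglik(c, instruction + " " + input_text))

def flipped_predict(instruction: str, input_text: str, candidates: list[str]) -> str:
    # Flipped direction: score the instruction given input + candidate label.
    return max(candidates, key=lambda c: loglik(instruction, input_text + " " + c))

instruction = "Classify the sentiment of the review as positive or negative."
review = "The movie was a delight from start to finish."
print(direct_predict(instruction, review, ["positive", "negative"]))
print(flipped_predict(instruction, review, ["positive", "negative"]))
```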

Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization

1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

During zero-shot inference with language models (LMs), hard prompts alone may not fully describe the target task (a minimal retrieval sketch follows this entry).

Retrieval
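A minimal sketch of the general idea, assuming a library of task-specific soft prompts trained offline: embed the query, retrieve the most similar soft prompt, and prepend it to the input embeddings. The names, shapes, and cosine-similarity retrieval rule here are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

# Hypothetical soft-prompt retrieval at zero-shot inference time.
rng = np.random.default_rng(0)
embed_dim, prompt_len = 16, 4

# Library: one (key embedding, soft prompt) pair per source task.
library = {
    task: (rng.normal(size=embed_dim), rng.normal(size=(prompt_len, embed_dim)))
    for task in ["nli", "qa", "sentiment"]
}

def retrieve_soft_prompt(query_embedding: np.ndarray) -> np.ndarray:
    """Return the soft prompt whose key is most similar (cosine) to the query."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_task = max(library, key=lambda t: cosine(query_embedding, library[t][0]))
    return library[best_task][1]

query_emb = rng.normal(size=embed_dim)          # stand-in for an encoded test input
input_embs = rng.normal(size=(10, embed_dim))   # stand-in for token embeddings
prompted = np.concatenate([retrieve_soft_prompt(query_emb), input_embs], axis=0)
print(prompted.shape)  # (prompt_len + 10, embed_dim)
```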
