Search Results for author: Seungone Kim

Found 5 papers, 5 papers with code

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

1 code implementation • 23 May 2023 • Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

Large Language Models (LLMs) have shown enhanced capabilities in solving novel tasks by reasoning step by step, known as Chain-of-Thought (CoT) reasoning; how can we instill the same capability of reasoning step by step on unseen tasks into LMs with fewer than 100B parameters?

Few-Shot Learning

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification

1 code implementation • 7 Mar 2023 • Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo

To improve the correctness of the explanations, fine-tuning language models with explanation data is needed.

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

1 code implementation • 7 Feb 2023 • Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

1 code implementation • COLING 2022 • Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo

In this paper, we propose to leverage a unique characteristic of dialogues, the commonsense knowledge shared across participants, to resolve the difficulties in summarizing them.

Abstractive Dialogue Summarization • Multi-Task Learning • +1

Can Language Models perform Abductive Commonsense Reasoning?

1 code implementation • 7 Jul 2022 • Seungone Kim

Abductive Reasoning is the task of inferring the most plausible hypothesis given a set of observations.
