2 code implementations • 23 May 2023 • Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
Furthermore, we show that instruction tuning with the CoT Collection gives LMs stronger few-shot learning capabilities on 4 domain-specific tasks, yielding improvements of +2.24% (Flan-T5 3B) and +2.37% (Flan-T5 11B), even outperforming ChatGPT, which uses demonstrations up to the maximum input length, by a +13.98% margin.
Ranked #1 on Few-Shot Learning on PubMedQA
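As a rough illustration of the few-shot setup compared against above, the following is a minimal sketch of packing chain-of-thought demonstrations into a prompt up to a fixed token budget. The tokenizer choice, the (question, rationale, answer) field layout, and the 2048-token budget are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: greedily pack CoT demonstrations into a prompt until a token budget is hit.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")  # Flan-T5 3B (assumed)

def pack_demonstrations(demos, question, max_tokens=2048):
    """demos: iterable of (question, rationale, answer) triples (assumed format)."""
    header = ""
    query = f"Question: {question}\nLet's think step by step.\n"
    for q, rationale, answer in demos:
        block = f"Question: {q}\nAnswer: {rationale} So the answer is {answer}.\n\n"
        # Stop adding demonstrations once the next one would exceed the budget.
        if len(tokenizer(header + block + query).input_ids) > max_tokens:
            break
        header += block
    return header + query
```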
1 code implementation • 7 Mar 2023 • Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo
To improve the correctness of these explanations, language models need to be fine-tuned with explanation data.
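A minimal sketch of what fine-tuning with explanation data can look like: the target sequence is the explanation followed by the answer, trained with the ordinary seq2seq cross-entropy objective. The model choice and the toy instance are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: one training step where the label is "explanation + answer".
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

question = "Is the Pacific Ocean larger than the Atlantic Ocean?"
explanation = "The Pacific covers about 165 million km^2, the Atlantic about 106 million km^2."
answer = "yes"

inputs = tokenizer(question, return_tensors="pt")
labels = tokenizer(f"{explanation} So the answer is {answer}.", return_tensors="pt").input_ids

# Standard seq2seq cross-entropy over the explanation and the answer tokens.
loss = model(**inputs, labels=labels).loss
loss.backward()  # an optimizer step would follow in a real training loop
```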
1 code implementation • COLING 2022 • Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo
In this paper, we propose to leverage a unique characteristic of dialogues, namely that commonsense knowledge is shared across participants, to address the difficulty of summarizing them.
Ranked #2 on Text Summarization on DialogSum
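In the spirit of this paper, here is a minimal sketch of commonsense-injected dialogue summarization: inferences about the speakers are prepended to the dialogue before it is summarized. The hand-written inference strings, the `<sep>` delimiter, and the BART checkpoint are illustrative assumptions; the paper derives such inferences from a commonsense model rather than writing them by hand.

```python
# Sketch: prepend commonsense inferences to a dialogue, then summarize.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")

dialogue = (
    "#Person1#: I missed the bus again, can you drive me?\n"
    "#Person2#: Sure, give me five minutes to grab my keys."
)
# Placeholder inferences; in the paper these come from a commonsense model.
inferences = [
    "Person1 intends to get to a destination on time.",
    "Person2 is willing to help Person1.",
]
source = " <sep> ".join(inferences) + " <sep> " + dialogue

inputs = tokenizer(source, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```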