1 code implementation • 23 May 2023 • Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
Large Language Models (LLMs) have shown an enhanced ability to solve novel tasks by reasoning step by step, known as Chain-of-Thought (CoT) reasoning; how can we instill the same step-by-step reasoning capability on unseen tasks into LMs with fewer than 100B parameters?
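A common way to instill step-by-step reasoning into smaller LMs is CoT fine-tuning, i.e. training on targets that contain a rationale followed by the final answer. The sketch below is illustrative only; the example strings and field names are assumptions, not taken from the paper.

```python
# Illustrative CoT fine-tuning example: the target sequence contains a
# step-by-step rationale followed by the final answer, so the LM learns
# to produce the reasoning before the answer. All strings are made up.
cot_example = {
    "instruction": "Answer the question and explain your reasoning step by step.",
    "input": "If a train travels 60 km in 1.5 hours, what is its average speed?",
    "target": (
        "Average speed is distance divided by time. "
        "60 km / 1.5 h = 40 km/h. "
        "So the answer is 40 km/h."
    ),
}

# The (instruction + input, target) pair can then be fed to the same
# maximum-likelihood fine-tuning loop used for ordinary instruction tuning.
source = cot_example["instruction"] + "\n" + cot_example["input"]
print(source)
print(cot_example["target"])
```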
1 code implementation • 7 Feb 2023 • Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
Recently, Language Models (LMs) instruction-tuned on multiple tasks, a procedure also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.
1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance.
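The meta-training objective described above can be sketched as a standard seq2seq maximum-likelihood step: the model is trained to generate the target label conditioned on the task instruction and the input instance. The model name, example text, and hyperparameters below are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of the meta-training objective: maximize the likelihood
# of the target label y given the task instruction I and input instance x.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # illustrative choice
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One hypothetical training example: (instruction, input, label).
example = {
    "instruction": "Classify the sentiment of the review as positive or negative.",
    "input": "The movie was a delight from start to finish.",
    "label": "positive",
}

# Concatenate instruction and input instance as the source sequence.
source = example["instruction"] + " " + example["input"]
enc = tokenizer(source, return_tensors="pt")
labels = tokenizer(example["label"], return_tensors="pt").input_ids

# The seq2seq cross-entropy loss is the negative log-likelihood of the label,
# so minimizing it maximizes p(label | instruction, input).
loss = model(input_ids=enc.input_ids,
             attention_mask=enc.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()
```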
1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
During zero-shot inference with language models (LMs), hard prompts alone may not fully describe the target task.
no code implementations • 25 Apr 2019 • Kyongsik Yun, Luan Nguyen, Tuan Nguyen, Doyoung Kim, Sarah Eldin, Alexander Huyen, Thomas Lu, Edward Chow
We compared the performance of the auto-detection system with that of the human eye.
1 code implementation • ICCV 2017 • Inwoong Lee, Doyoung Kim, Seoungyoon Kang, Sang-Hoon Lee
In our network, we utilize an average ensemble among multiple parts as a final feature to capture various temporal dependencies.
Ranked #86 on Skeleton Based Action Recognition on NTU RGB+D
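The average ensemble of part features mentioned in the abstract above can be sketched as follows; the part names, feature dimension, and tensor shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch: average the features produced by several per-part streams
# (e.g. arms, legs, torso) to form a single final feature vector.
# Shapes and part names are assumptions for illustration only.
import torch

batch_size, feature_dim = 8, 256
part_features = {
    "left_arm": torch.randn(batch_size, feature_dim),
    "right_arm": torch.randn(batch_size, feature_dim),
    "torso": torch.randn(batch_size, feature_dim),
    "legs": torch.randn(batch_size, feature_dim),
}

# Average ensemble: stack the per-part features and take the mean,
# yielding one (batch_size, feature_dim) tensor used as the final feature.
final_feature = torch.stack(list(part_features.values()), dim=0).mean(dim=0)
print(final_feature.shape)  # torch.Size([8, 256])
```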