1 code implementation • 22 Feb 2024 • Seungduk Kim, Seungtaek Choi, Myeongho Jeong
This report introduces EEVE-Korean-v1.0, a Korean adaptation of large language models that exhibits remarkable capabilities across English and Korean text understanding.
1 code implementation • CVPR 2023 • Hyojun Go, Yunsung Lee, Jin-Young Kim, SeungHyun Lee, Myeongho Jeong, Hyun Seung Lee, Seungtaek Choi
For this, the existing practice is to fine-tune the guidance models on labeled data corrupted with noise.
1 code implementation • 21 Nov 2022 • Hyeongdon Moon, Yoonseok Yang, Jamin Shin, Hangyeol Yu, SeungHyun Lee, Myeongho Jeong, Juneyoung Park, Minsam Kim, Seungtaek Choi
They fail to evaluate the MCQ's ability to assess the student's knowledge of the corresponding target fact.
1 code implementation • EMNLP 2021 • Jihyuk Kim, Myeongho Jeong, Seungtaek Choi, Seung-won Hwang
The second phase, encoding structure, builds a graph of keyphrases and the given document to obtain the structure-aware representation of the augmented text.
1 code implementation • 6 Mar 2023 • Hangyeol Yu, Myeongho Jeong, Jamin Shin, Hyeongdon Moon, Juneyoung Park, Seungtaek Choi
Large Pre-trained Language Models (PLMs) have become the most desirable starting point in NLP, as they have proven remarkably good at solving many individual tasks.
no code implementations • EMNLP (sustainlp) 2020 • Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang
This paper studies label augmentation for training dialogue response selection.
no code implementations • 26 May 2023 • Shinhyeok Oh, Hyojun Go, Hyeongdon Moon, Yunsung Lee, Myeongho Jeong, Hyun Seung Lee, Seungtaek Choi
To this end, we propose to paraphrase the reference question for a more robust QG evaluation.
no code implementations • 30 May 2023 • Hyun Seung Lee, Seungtaek Choi, Yunsung Lee, Hyeongdon Moon, Shinhyeok Oh, Myeongho Jeong, Hyojun Go, Christian Wallraven
To mitigate these issues, we propose CEAA, a novel retrieval approach that provides effective learning for educational text classification.
no code implementations • 8 Jun 2023 • Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, Seungtaek Choi
In this paper, we address the performance degradation of efficient diffusion models by introducing Multi-architecturE Multi-Expert diffusion models (MEME).