1 code implementation • 22 Feb 2024 • Seungduk Kim, Seungtaek Choi, Myeongho Jeong
This report introduces EEVE-Korean-v1.0, a Korean adaptation of large language models that exhibits remarkable capabilities across English and Korean text understanding.
1 code implementation • CVPR 2023 • Hyojun Go, Yunsung Lee, Jin-Young Kim, SeungHyun Lee, Myeongho Jeong, Hyun Seung Lee, Seungtaek Choi
To that end, the existing practice is to fine-tune the guidance models on labeled data corrupted with noise.
1 code implementation • 21 Nov 2022 • Hyeongdon Moon, Yoonseok Yang, Jamin Shin, Hangyeol Yu, SeungHyun Lee, Myeongho Jeong, Juneyoung Park, Minsam Kim, Seungtaek Choi
They fail to evaluate whether a multiple-choice question (MCQ) actually assesses the student's knowledge of the corresponding target fact.
1 code implementation • NeurIPS 2023 • Hyojun Go, Jinyoung Kim, Yunsung Lee, SeungHyun Lee, Shinhyeok Oh, Hyeongdon Moon, Seungtaek Choi
Through this, our approach addresses the issue of negative transfer in diffusion models by allowing for efficient computation of MTL methods.
1 code implementation • EMNLP 2021 • Jihyuk Kim, Myeongho Jeong, Seungtaek Choi, Seung-won Hwang
The second phase, encoding structure, builds a graph of keyphrases and the given document to obtain the structure-aware representation of the augmented text.
1 code implementation • 6 Mar 2023 • Hangyeol Yu, Myeongho Jeong, Jamin Shin, Hyeongdon Moon, Juneyoung Park, Seungtaek Choi
Large Pre-trained Language Models (PLMs) have become the most desirable starting point in the field of NLP, as they are remarkably good at solving many individual tasks.
no code implementations • IJCNLP 2019 • Hojae Han, Seungtaek Choi, Haeju Park, Seung-won Hwang
This paper studies the problem of non-factoid question answering, where the answer may span over multiple sentences.
no code implementations • EMNLP 2020 • Seungtaek Choi, Haeju Park, Jinyoung Yeo, Seung-won Hwang
We aim to leverage human and machine intelligence together for attention supervision.
no code implementations • COLING 2020 • Jihyeok Kim, Seungtaek Choi, Reinald Kim Amplayo, Seung-won Hwang
We thus propose to additionally leverage references, which are selected from a large pool of texts labeled with one of the attributes, as textual information that enriches inductive biases of given attributes.
no code implementations • EMNLP (sustainlp) 2020 • Seungtaek Choi, Myeongho Jeong, Jinyoung Yeo, Seung-won Hwang
This paper studies label augmentation for training dialogue response selection.
no code implementations • Findings (ACL) 2022 • Minji Seo, YeonJoon Jung, Seungtaek Choi, Seung-won Hwang, Bei Liu
We study event understanding as a critical step towards visual commonsense tasks. We argue that current object-based event understanding is purely likelihood-based, leading to incorrect event predictions due to biased correlations between events and objects. We propose to mitigate such biases with do-calculus from causality research, overcoming its limited robustness through an optimized aggregation with association-based prediction. We show the effectiveness of our approach, intrinsically by comparing our generated events with ground-truth event annotations, and extrinsically on downstream commonsense tasks.
no code implementations • 26 May 2023 • Shinhyeok Oh, Hyojun Go, Hyeongdon Moon, Yunsung Lee, Myeongho Jeong, Hyun Seung Lee, Seungtaek Choi
To this end, we propose to paraphrase the reference question for a more robust QG evaluation.
no code implementations • 30 May 2023 • Hyun Seung Lee, Seungtaek Choi, Yunsung Lee, Hyeongdon Moon, Shinhyeok Oh, Myeongho Jeong, Hyojun Go, Christian Wallraven
To mitigate these issues, we propose a novel retrieval approach, CEAA, that provides effective learning for educational text classification.
no code implementations • 7 Jun 2023 • Jin-Young Kim, Soonwoo Kwon, Hyojun Go, Yunsung Lee, Seungtaek Choi
Self-supervised contrastive learning (CL) has achieved state-of-the-art performance in representation learning by minimizing the distance between positive pairs while maximizing that of negative ones.
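The objective described above can be illustrated with a minimal, self-contained sketch of an InfoNCE-style contrastive loss. This is a generic illustration of the standard CL formulation, not the paper's exact loss; the function names and the temperature value are assumptions for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as plain lists."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: the loss is low when the anchor is
    more similar to its positive than to any of the negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    shift = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - shift) for l in logits]
    # Cross-entropy with the positive pair as the target class.
    return -math.log(exps[0] / sum(exps))

# A well-aligned positive yields a smaller loss than a misaligned one.
anchor = [1.0, 0.0]
low = info_nce(anchor, [1.0, 0.1], [[-1.0, 0.0], [0.0, 1.0]])
high = info_nce(anchor, [-1.0, 0.0], [[1.0, 0.1], [0.0, 1.0]])
```

Minimizing this loss pulls positive pairs together and pushes negatives apart, which is the distance behavior the snippet describes.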
no code implementations • 8 Jun 2023 • Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, Seungtaek Choi
In this paper, we address the performance degradation of efficient diffusion models by introducing Multi-architecturE Multi-Expert diffusion models (MEME).
no code implementations • 25 Jun 2023 • Jungbae Park, Seungtaek Choi
However, this study highlights a significant decrease in the performance of speech scoring systems in new question contexts, identifying this as an item cold-start problem.