no code implementations • 15 Oct 2024 • Yejin Kim, Eojin Kang, Juae Kim, H. Howie Huang
Large language models (LLMs) typically improve performance either by retrieving semantically similar information or by enhancing their reasoning abilities through structured prompts such as chain-of-thought.
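The two strategies this entry contrasts can be illustrated with a minimal prompt-construction sketch; the prompt wording and helper names below are illustrative, not from the paper.

```python
# Sketch of the two common prompting strategies: retrieval-augmented
# prompting vs. chain-of-thought prompting. Templates are illustrative.

def retrieval_prompt(question: str, retrieved_passages: list[str]) -> str:
    """Prepend semantically similar passages to the question (retrieval)."""
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    """Structured prompt that elicits step-by-step reasoning (CoT)."""
    return f"Question: {question}\nLet's think step by step."

print(retrieval_prompt("Who wrote Hamlet?",
                       ["Hamlet is a tragedy by William Shakespeare."]))
print(chain_of_thought_prompt("Who wrote Hamlet?"))
```

Either string would then be sent to the LLM; the first grounds the answer in retrieved text, the second shapes the model's reasoning.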
no code implementations • 18 Apr 2022 • Hwanhee Lee, Cheoneum Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Juae Kim, Kyomin Jung
In this paper, we propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process.
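The core idea of entity-level post-editing can be sketched as follows; this is a naive illustration under assumed inputs (pre-extracted entity spans with types), not the RFEC system itself.

```python
# Hypothetical sketch of entity-retrieval post-editing: replace entities in
# a draft summary that do not appear in the source evidence with an
# evidence entity of the same type. Data and matching rule are illustrative.

def correct_entities(summary_tokens, summary_entities, evidence_entities):
    """summary_entities: (entity, type, token_index) triples;
    evidence_entities: (entity, type) pairs from the source text."""
    corrected = list(summary_tokens)
    evidence_set = {e for e, _ in evidence_entities}
    for ent, etype, idx in summary_entities:
        if ent not in evidence_set:
            # retrieve candidate replacements of the same entity type
            candidates = [e for e, t in evidence_entities if t == etype]
            if candidates:
                corrected[idx] = candidates[0]  # naive: take first match
    return corrected

tokens = ["Obama", "visited", "Paris", "in", "2008"]
summary_ents = [("Obama", "PER", 0), ("Paris", "LOC", 2)]
evidence_ents = [("Obama", "PER"), ("London", "LOC")]
print(correct_entities(tokens, summary_ents, evidence_ents))
# → ['Obama', 'visited', 'London', 'in', '2008']
```

A real system would rank candidates by context fit rather than taking the first type match.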
1 code implementation • 30 Sep 2021 • Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, Kyomin Jung
Specifically, we employ a two-stage augmentation pipeline to generate new claims and evidence from existing samples.
no code implementations • SEMEVAL 2019 • Cheoneum Park, Juae Kim, Hyeon-gu Lee, Reinald Kim Amplayo, Harksoo Kim, Jungyun Seo, Chang-Ki Lee
This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums.
no code implementations • WS 2017 • Juae Kim, Sunjae Kwon, Youngjoong Ko, Jungyun Seo
To generate a large amount of machine-labeled data, we first create an initial machine-labeled dataset using a chunker and MetaMap.