no code implementations • ICON 2021 • Yeoun Yi, Hyopil Shin
We present LexPOS, a sequence-to-sequence transformer model that generates slogans given phonetic and structural information.
no code implementations • WNUT (ACL) 2021 • Sangah Lee, Hyopil Shin
User-generated texts include various types of stylistic properties, or noise.
no code implementations • 25 Mar 2024 • Dongjun Jang, Sungjoo Byun, Hyopil Shin
This study examines whether the attention scores between tokens in the BERT model significantly vary based on lexical categories during the fine-tuning process for downstream tasks.
no code implementations • 25 Mar 2024 • Dongjun Jang, Sungjoo Byun, Hyemi Jo, Hyopil Shin
Based on its quality and the empirical results, this paper proposes that KIT-19 has the potential to contribute substantially to the future improvement of Korean LLMs' performance.
no code implementations • 24 Mar 2024 • Sungjoo Byun, Jiseung Hong, Sumin Park, Dongjun Jang, Jean Seo, Minseok Kim, Chaeyoung Oh, Hyopil Shin
Named Entity Recognition (NER) plays a pivotal role in medical Natural Language Processing (NLP).
no code implementations • 23 Feb 2024 • Dongjun Jang, Jean Seo, Sungjoo Byun, Taekyoung Kim, Minseok Kim, Hyopil Shin
In order to tackle these challenges, we introduce CARBD-Ko (a Contextually Annotated Review Benchmark Dataset for Aspect-Based Sentiment Classification in Korean), a benchmark dataset that incorporates aspects and dual-tagged polarities to distinguish between aspect-specific and aspect-agnostic sentiment classification.
no code implementations • 30 Nov 2023 • Sungjoo Byun, Dongjun Jang, Hyemi Jo, Hyopil Shin
Caution: this paper may include material that could be offensive or distressing.
no code implementations • 23 Nov 2023 • Dongjun Jang, Sangah Lee, Sungjoo Byun, Jinwoong Kim, Jean Seo, Minseok Kim, Soyeon Kim, Chaeyoung Oh, Jaeyoon Kim, Hyemi Jo, Hyopil Shin
This paper presents the DaG LLM (David and Goliath Large Language Model), a language model specialized for Korean and fine-tuned through Instruction Tuning across 41 tasks within 13 distinct categories.
1 code implementation • 10 Aug 2020 • Sangah Lee, Hansol Jang, Yunmee Baik, Suzi Park, Hyopil Shin
Since the appearance of BERT, recent works including XLNet and RoBERTa have utilized sentence embedding models pre-trained on large corpora with large numbers of parameters.
no code implementations • 31 Dec 2017 • Youngsam Kim, Hyopil Shin
This study implements a vector space model approach to measure the sentiment orientations of words.