no code implementations • 22 Apr 2024 • Yukyung Lee, Soonwon Ka, Bokyung Son, Pilsung Kang, Jaewook Kang
Large Language Models (LLMs) have significantly impacted the writing process, enabling collaborative content creation and enhancing productivity.
no code implementations • 27 Mar 2024 • Yukyung Lee, Joonghoon Kim, Jaehee Kim, Hyowon Cho, Pilsung Kang
We introduce CheckEval, a novel evaluation framework using Large Language Models, addressing the challenges of ambiguity and inconsistency in current evaluation methods.
1 code implementation • 8 Dec 2023 • Yookyung Kho, Jaehee Kim, Pilsung Kang
Recently, prompt-based fine-tuning has garnered considerable interest as a core technique for few-shot text classification tasks.
1 code implementation • 9 Nov 2023 • Gunho No, Yukyung Lee, Hyeongwon Kang, Pilsung Kang
We introduce RAPID, a model that capitalizes on the inherent features of log data to enable anomaly detection without training delays, ensuring real-time capability.
1 code implementation • 7 Nov 2023 • Joonghoon Kim, Saeran Park, Kiyoon Jeong, Sangmin Lee, Seung Hun Han, Jiyoon Lee, Pilsung Kang
With advanced Large Language Models (LLMs) such as GPT-4, evaluating the quality of Natural Language Generation (NLG) has become increasingly paramount.
no code implementations • 3 Jun 2023 • Yukyung Lee, Jaehee Kim, Doyoon Kim, Yookyung Kho, Younsun Kim, Pilsung Kang
As the e-commerce market continues to expand and online transactions proliferate, customer reviews have emerged as a critical element in shaping the purchasing decisions of prospective buyers.
no code implementations • 2 Mar 2023 • Heejeong Choi, Pilsung Kang
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks related to contextual, temporal, and transformation consistency.
no code implementations • 8 Jul 2022 • Yukyung Lee, Takyoung Kim, Hoonsang Yoon, Pilsung Kang, Junseong Bang, Misuk Kim
Dialogue State Tracking (DST) is critical for comprehensively interpreting user and system utterances, thereby forming the cornerstone of efficient dialogue systems.
1 code implementation • 18 Jun 2022 • Jaehyuk Heo, YongGi Jeong, Sunwoo Kim, Jaehee Kim, Pilsung Kang
We designed a Rich Encoder-decoder framework for Video Event CAptioner (REVECA) that utilizes spatial and temporal information from the video to generate a caption for the corresponding event boundary.
1 code implementation • 21 Mar 2022 • Yunseung Lee, Pilsung Kang
Therefore, current image anomaly detection methods have commonly used convolutional encoder-decoders to extract normal information through the local features of images.
no code implementations • ACL 2022 • Takyoung Kim, Hoonsang Yoon, Yukyung Lee, Pilsung Kang, Misuk Kim
Dialogue state tracking (DST) aims to extract essential information from multi-turn dialogue situations and take appropriate actions.
no code implementations • 21 Feb 2022 • Heejeong Choi, Subin Kim, Pilsung Kang
Predictive coding is further introduced to encourage the model to learn the temporal dependencies of the time series.
no code implementations • 18 Nov 2021 • Yukyung Lee, Jina Kim, Pilsung Kang
The system log generated in a computer system refers to large-scale data that are collected simultaneously and used as the basic data for detecting errors, intrusions, and abnormal behaviors.
2 code implementations • 11 Oct 2021 • Jounghee Kim, Pilsung Kang
Wav2vec 2.0 is an end-to-end framework of self-supervised learning for speech representation that is successful in automatic speech recognition (ASR), but most of the work on the topic has been developed with a single language: English.
Automatic Speech Recognition (ASR) +3
no code implementations • 28 Aug 2021 • Takyoung Kim, Yukyung Lee, Hoonsang Yoon, Pilsung Kang, Junseong Bang, Misuk Kim
The primary purpose of dialogue state tracking (DST), a critical component of an end-to-end conversational system, is to build a model that responds well to real-world situations.
no code implementations • 22 Jul 2021 • Junghoon Lee, Jounghee Kim, Pilsung Kang
Language models (LMs) pretrained on a large text corpus and fine-tuned on a downstream task have become a de facto training strategy for several natural language processing (NLP) tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Youngbin Ro, Yukyung Lee, Pilsung Kang
In this paper, we propose Multi$^2$OIE, which performs open information extraction (open IE) by combining BERT with multi-head attention.
Ranked #7 on Open Information Extraction on CaRB
no code implementations • 16 Jan 2019 • Myeongjun Jang, Pilsung Kang
Sentence embedding is a significant research topic in the field of natural language processing (NLP).
1 code implementation • 16 Aug 2018 • Myeongjun Jang, Pilsung Kang
However, because performance on sentence classification and sentiment analysis can be enhanced even with simple sentence representation methods, strong results on such tasks are not sufficient to claim that these models fully reflect the meanings of sentences.
1 code implementation • 9 Feb 2018 • Myeongjun Jang, Seungwan Seo, Pilsung Kang
In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words.
no code implementations • 28 Sep 2017 • Gichang Lee, Jaeyun Jeong, Seungwan Seo, CzangYeob Kim, Pilsung Kang
In order to maximize the applicability of sentiment analysis results, it is necessary to not only classify the overall sentiment (positive/negative) of a given document but also to identify the main words that contribute to the classification.