Search Results for author: Pilsung Kang

Found 22 papers, 10 papers with code

Navigating the Path of Writing: Outline-guided Text Generation with Large Language Models

no code implementations22 Apr 2024 Yukyung Lee, Soonwon Ka, Bokyung Son, Pilsung Kang, Jaewook Kang

Large Language Models (LLMs) have significantly impacted the writing process, enabling collaborative content creation and enhancing productivity.

Text Generation
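
As a rough illustration of the outline-then-draft idea described above (the `llm` stub and function names below are placeholders, not the authors' implementation), a two-stage pipeline might look like this:

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (an API or local model)."""
    return f"<generated text for: {prompt[:40]}...>"

def write_with_outline(topic: str, n_sections: int = 3) -> str:
    # Stage 1: draft a short outline for the piece.
    outline = [llm(f"Outline point {i+1} for an article about {topic}") for i in range(n_sections)]
    # Stage 2: expand each outline point into a paragraph, conditioning on the
    # full outline so the draft stays on the planned path.
    sections = [llm(f"Outline: {outline}\nExpand this point into a paragraph: {point}") for point in outline]
    return "\n\n".join(sections)

print(write_with_outline("collaborative writing with LLMs"))
```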

CheckEval: Robust Evaluation Framework using Large Language Model via Checklist

no code implementations27 Mar 2024 Yukyung Lee, Joonghoon Kim, Jaehee Kim, Hyowon Cho, Pilsung Kang

We introduce CheckEval, a novel evaluation framework using Large Language Models, addressing the challenges of ambiguity and inconsistency in current evaluation methods.

Language Modelling Large Language Model
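
A minimal sketch of the checklist idea, assuming a decomposition into yes/no questions answered by an LLM judge and a simple yes-ratio aggregation (both assumptions, not the CheckEval specification):

```python
# An evaluation dimension is broken into yes/no questions, each answered by an
# LLM judge, and the fraction of "yes" answers becomes the score.
CHECKLIST = {
    "coherence": [
        "Do the sentences follow a logical order?",
        "Are pronouns and references unambiguous?",
        "Is there a clear connection between consecutive paragraphs?",
    ]
}

def llm_judge(question: str, text: str) -> bool:
    """Placeholder for an LLM call that returns a yes/no verdict."""
    return len(text) > 50  # dummy heuristic so the sketch runs

def checklist_score(text: str, dimension: str) -> float:
    answers = [llm_judge(q, text) for q in CHECKLIST[dimension]]
    return sum(answers) / len(answers)  # share of satisfied checklist items

print(checklist_score("An example summary whose coherence we want to rate ...", "coherence"))
```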

Boosting Prompt-Based Self-Training With Mapping-Free Automatic Verbalizer for Multi-Class Classification

1 code implementation8 Dec 2023 Yookyung Kho, Jaehee Kim, Pilsung Kang

Recently, prompt-based fine-tuning has garnered considerable interest as a core technique for few-shot text classification.

Few-Shot Text Classification Language Modelling +3
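
One plausible reading of a mapping-free verbalizer, sketched with placeholder dimensions and a random stand-in for the MLM output (not the paper's model): the vocabulary logits at the [MASK] position are fed into a small trainable layer instead of a hand-crafted label-word mapping.

```python
import torch
import torch.nn as nn

vocab_size, num_classes = 30522, 5
mlm_mask_logits = torch.randn(8, vocab_size)       # stand-in for MLM output at [MASK] for a batch of 8
verbalizer = nn.Linear(vocab_size, num_classes)    # mapping-free: learned instead of manually specified

class_logits = verbalizer(mlm_mask_logits)
loss = nn.functional.cross_entropy(class_logits, torch.randint(0, num_classes, (8,)))
loss.backward()                                     # only the verbalizer (and optionally the MLM) receives gradients
print(class_logits.shape)                           # torch.Size([8, 5])
```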

RAPID: Training-free Retrieval-based Log Anomaly Detection with PLM considering Token-level information

1 code implementation9 Nov 2023 Gunho No, Yukyung Lee, Hyeongwon Kang, Pilsung Kang

We introduce RAPID, a model that capitalizes on the inherent features of log data to enable anomaly detection without training delays, ensuring real-time capability.

Anomaly Detection Retrieval
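
A minimal sketch of the retrieval-based scoring idea, with a random-vector stub in place of the pretrained language model and sequence-level rather than token-level comparison; both simplifications are assumptions:

```python
import numpy as np

# Embed every log line (stubbed here), keep the normal logs as a reference
# index built without any training step, and score a new log by its distance
# to the closest normal log.
def embed(log_line: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(log_line)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

normal_logs = ["connection opened", "connection closed", "heartbeat ok"]
index = np.stack([embed(line) for line in normal_logs])

def anomaly_score(log_line: str) -> float:
    q = embed(log_line)
    sims = index @ q                 # cosine similarity (vectors are unit-norm)
    return float(1.0 - sims.max())   # far from every normal log => high score

print(anomaly_score("kernel panic: segfault at 0x0"))
```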

Which is better? Exploring Prompting Strategy For LLM-based Metrics

1 code implementation7 Nov 2023 Joonghoon Kim, Saeran Park, Kiyoon Jeong, Sangmin Lee, Seung Hun Han, Jiyoon Lee, Pilsung Kang

With advanced Large Language Models (LLMs) such as GPT-4, evaluating the quality of Natural Language Generation (NLG) has become increasingly paramount.

Text Generation

Painsight: An Extendable Opinion Mining Framework for Detecting Pain Points Based on Online Customer Reviews

no code implementations3 Jun 2023 Yukyung Lee, Jaehee Kim, Doyoon Kim, Yookyung Kho, Younsun Kim, Pilsung Kang

As the e-commerce market continues to expand and online transactions proliferate, customer reviews have emerged as a critical element in shaping the purchasing decisions of prospective buyers.

Opinion Mining Sentiment Analysis +1

Multi-Task Self-Supervised Time-Series Representation Learning

no code implementations2 Mar 2023 Heejeong Choi, Pilsung Kang

We propose a new time-series representation learning method by combining the advantages of self-supervised tasks related to contextual, temporal, and transformation consistency.

Anomaly Detection Contrastive Learning +5
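
A toy sketch of such a multi-task objective, with illustrative consistency terms and equal weights that are assumptions rather than the paper's configuration:

```python
import torch
import torch.nn.functional as F

# One encoder trained with several self-supervised consistency losses at once.
encoder = torch.nn.GRU(input_size=1, hidden_size=32, batch_first=True)

def encode(x):                      # x: (batch, time, 1) -> (batch, 32)
    _, h = encoder(x)
    return h[-1]

x = torch.randn(16, 100, 1)
view_a, view_b = x + 0.05 * torch.randn_like(x), x + 0.05 * torch.randn_like(x)
z_a, z_b = encode(view_a), encode(view_b)

contextual = 1 - F.cosine_similarity(z_a, z_b).mean()                               # two views of a series should agree
temporal   = 1 - F.cosine_similarity(encode(x[:, :50]), encode(x[:, 50:])).mean()   # adjacent windows stay close
transform  = 1 - F.cosine_similarity(encode(-x), z_a.detach()).mean()               # invariance to a simple transform

loss = contextual + temporal + transform   # equal weights purely for illustration
loss.backward()
print(float(loss))
```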

DSTEA: Improving Dialogue State Tracking via Entity Adaptive Pre-training

no code implementations8 Jul 2022 Yukyung Lee, Takyoung Kim, Hoonsang Yoon, Pilsung Kang, Junseong Bang, Misuk Kim

Dialogue State Tracking (DST) is critical for comprehensively interpreting user and system utterances, thereby forming the cornerstone of efficient dialogue systems.

Dialogue State Tracking named-entity-recognition +1

REVECA -- Rich Encoder-decoder framework for Video Event CAptioner

1 code implementation18 Jun 2022 Jaehyuk Heo, YongGi Jeong, Sunwoo Kim, Jaehee Kim, Pilsung Kang

We designed a Rich Encoder-decoder framework for Video Event CAptioner (REVECA) that utilizes spatial and temporal information from the video to generate a caption for the corresponding event boundary.

Semantic Segmentation Video Understanding

AnoViT: Unsupervised Anomaly Detection and Localization with Vision Transformer-based Encoder-Decoder

1 code implementation21 Mar 2022 Yunseung Lee, Pilsung Kang

Current image anomaly detection methods have commonly used convolutional encoder-decoders to extract normal information through the local features of images.

Unsupervised Anomaly Detection
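
A minimal sketch of reconstruction-based detection and localization; a tiny convolutional autoencoder stands in for the Vision Transformer encoder-decoder purely to keep the example self-contained:

```python
import torch
import torch.nn as nn

# The encoder-decoder is trained to reconstruct only normal images; at test
# time the per-pixel reconstruction error serves as a localization map and its
# mean serves as the image-level anomaly score.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1),
)

image = torch.rand(1, 3, 64, 64)
recon = model(image)

error_map = (image - recon).pow(2).mean(dim=1)   # (1, 64, 64) pixel-wise anomaly localization
score = error_map.mean().item()                  # image-level anomaly score
print(error_map.shape, score)
```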

Mismatch between Multi-turn Dialogue and its Evaluation Metric in Dialogue State Tracking

no code implementations ACL 2022 Takyoung Kim, Hoonsang Yoon, Yukyung Lee, Pilsung Kang, Misuk Kim

Dialogue state tracking (DST) aims to extract essential information from multi-turn dialogue situations and take appropriate actions.

Dialogue State Tracking

LAnoBERT: System Log Anomaly Detection based on BERT Masked Language Model

no code implementations18 Nov 2021 Yukyung Lee, Jina Kim, Pilsung Kang

The system log generated in a computer system refers to large-scale data that are collected simultaneously and used as the basic data for determining errors, intrusions, and abnormal behaviors.

Anomaly Detection Language Modelling +1
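
A minimal sketch of masked-language-model scoring for logs, with a toy unigram model standing in for BERT (an assumption made only so the example runs offline):

```python
import math

# Each token of a log line is masked in turn, the model assigns a probability
# to the original token, and a highly "surprising" token signals an anomaly.
normal_corpus = "connection opened connection closed heartbeat ok heartbeat ok".split()
counts = {w: normal_corpus.count(w) for w in set(normal_corpus)}

def mlm_prob(token: str) -> float:
    return counts.get(token, 0.1) / (len(normal_corpus) + 0.1)   # smoothed stand-in for P(token | context)

def anomaly_score(log_line: str) -> float:
    tokens = log_line.split()
    # Average negative log-probability over masked positions: higher = more anomalous.
    return sum(-math.log(mlm_prob(t)) for t in tokens) / len(tokens)

print(anomaly_score("heartbeat ok"))        # low score: seen in normal logs
print(anomaly_score("segfault at 0x0"))     # high score: unexpected tokens
```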

K-Wav2vec 2.0: Automatic Speech Recognition based on Joint Decoding of Graphemes and Syllables

2 code implementations11 Oct 2021 Jounghee Kim, Pilsung Kang

Wav2vec 2.0 is an end-to-end self-supervised learning framework for speech representation that has been successful in automatic speech recognition (ASR), but most work on the topic has focused on a single language: English.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Oh My Mistake!: Toward Realistic Dialogue State Tracking including Turnback Utterances

no code implementations28 Aug 2021 Takyoung Kim, Yukyung Lee, Hoonsang Yoon, Pilsung Kang, Junseong Bang, Misuk Kim

The primary purpose of dialogue state tracking (DST), a critical component of an end-to-end conversational system, is to build a model that responds well to real-world situations.

Dialogue State Tracking

Back-Translated Task Adaptive Pretraining: Improving Accuracy and Robustness on Text Classification

no code implementations22 Jul 2021 Junghoon Lee, Jounghee Kim, Pilsung Kang

Language models (LMs) pretrained on a large text corpus and fine-tuned on a downstream task have become the de facto training strategy for many natural language processing (NLP) tasks.

Language Modelling text-classification +2
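
A pipeline-level sketch of back-translated task-adaptive pretraining, with placeholder translate/pretrain/fine-tune functions that are assumptions, not the paper's code:

```python
# The task corpus is augmented by round-trip translation, the language model
# continues masked-LM pretraining on the augmented corpus, and only then is it
# fine-tuned on the labeled task data.
def back_translate(sentence: str) -> str:
    return f"<en->de->en paraphrase of: {sentence}>"   # stand-in for an MT round trip

task_corpus = ["the product arrived broken", "great value for the price"]
augmented = task_corpus + [back_translate(s) for s in task_corpus]

def continue_mlm_pretraining(corpus): print(f"adapting the LM on {len(corpus)} sentences")
def fine_tune_classifier(corpus):     print(f"fine-tuning on {len(corpus)} labeled examples")

continue_mlm_pretraining(augmented)   # task-adaptive pretraining on original + back-translated text
fine_tune_classifier(task_corpus)     # supervised fine-tuning on the original labeled data
```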

Multi$^2$OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT

1 code implementation17 Sep 2020 Youngbin Ro, Yukyung Lee, Pilsung Kang

In this paper, we propose Multi$^2$OIE, which performs open information extraction (open IE) by combining BERT with multi-head attention.

Computational Efficiency Open Information Extraction

Paraphrase Thought: Sentence Embedding Module Imitating Human Language Recognition

1 code implementation16 Aug 2018 Myeongjun Jang, Pilsung Kang

However, because the performance of sentence classification and sentiment analysis can be enhanced even with a simple sentence representation method, good performance on such tasks is not sufficient to claim that these models fully reflect the meanings of sentences.

Document Classification General Classification +8

Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning

1 code implementation9 Feb 2018 Myeongjun Jang, Seungwan Seo, Pilsung Kang

In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words.

Imputation Language Modelling +6
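
A generic skeleton of an RNN-based variational autoencoder with the reparameterization trick; the sizes and the way the latent is formed are assumptions, since the paper's RNN-SVAE constructs its semantic latent differently (e.g., from a combination of hidden states):

```python
import torch
import torch.nn as nn

# An RNN encoder summarizes the input sequence, two linear heads produce the
# mean and log-variance of a global latent vector, and the reparameterization
# trick samples the latent that would condition the decoder.
emb, hid, lat = 32, 64, 16
encoder = nn.GRU(emb, hid, batch_first=True)
to_mu, to_logvar = nn.Linear(hid, lat), nn.Linear(hid, lat)

x = torch.randn(4, 20, emb)                     # batch of 4 sequences, length 20
_, h = encoder(x)
mu, logvar = to_mu(h[-1]), to_logvar(h[-1])

z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)        # reparameterization trick
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term of the ELBO
print(z.shape, float(kl))                       # z would condition an RNN decoder
```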

Sentiment Classification with Word Attention based on Weakly Supervised Learning with a Convolutional Neural Network

no code implementations28 Sep 2017 Gichang Lee, Jaeyun Jeong, Seungwan Seo, CzangYeob Kim, Pilsung Kang

In order to maximize the applicability of sentiment analysis results, it is necessary to not only classify the overall sentiment (positive/negative) of a given document but also to identify the main words that contribute to the classification.

Classification General Classification +4
