1 code implementation • COLING 2022 • Reinald Kim Amplayo, Kang Min Yoo, Sang-Woo Lee
Metadata attributes (e.g., user and product IDs from reviews) can be incorporated as additional inputs to neural-based NLP models by expanding the models' architecture, in order to improve performance.
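As a rough sketch of the general idea described above (not the paper's specific method), attribute IDs can be embedded and concatenated with the text encoder's pooled output before classification; all module and parameter names below are hypothetical.

```python
# Rough sketch of the general idea (not the paper's specific method): embed the
# attribute IDs and concatenate them with the text encoder's pooled output before
# classification. All module and parameter names are hypothetical.
import torch
import torch.nn as nn

class AttributeInjectedClassifier(nn.Module):
    def __init__(self, text_encoder, hidden_dim, n_users, n_products,
                 attr_dim=64, n_classes=5):
        super().__init__()
        self.text_encoder = text_encoder              # any module: token ids -> (B, hidden_dim)
        self.user_emb = nn.Embedding(n_users, attr_dim)
        self.prod_emb = nn.Embedding(n_products, attr_dim)
        self.classifier = nn.Linear(hidden_dim + 2 * attr_dim, n_classes)

    def forward(self, token_ids, user_ids, product_ids):
        text_repr = self.text_encoder(token_ids)      # (B, hidden_dim)
        attrs = torch.cat([self.user_emb(user_ids), self.prod_emb(product_ids)], dim=-1)
        return self.classifier(torch.cat([text_repr, attrs], dim=-1))
```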
no code implementations • 23 May 2023 • Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, Minjoon Seo
In this work, we propose a novel framework for alignment learning with almost no human labor and no dependency on pre-aligned LLMs.
no code implementations • 23 May 2023 • Jeonghoon Kim, Jung Hyun Lee, Sungdong Kim, Joonsuk Park, Kang Min Yoo, Se Jung Kwon, Dongsoo Lee
Such a strategy considerably reduces the model size, leading to lower inference latency upon deployment and a reduction in the overall memory required.
no code implementations • 27 Jan 2023 • Hyunsoo Cho, Choonghyun Park, Junyeop Kim, Hyuhng Joon Kim, Kang Min Yoo, Sang-goo Lee
As the size of pre-trained language models (PLMs) continues to increase, numerous parameter-efficient transfer learning methods have recently been proposed to compensate for the tremendous cost of fine-tuning.
no code implementations • 21 Dec 2022 • Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
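A minimal sketch of the weighted-decoding half of this idea, assuming a critic that scores every candidate next token; it illustrates the general mechanism rather than the exact CriticControl algorithm.

```python
# Minimal sketch of critic-weighted decoding: a critic's per-token values reweight
# the language model's next-token distribution before sampling. The combination
# rule and `beta` are illustrative, not the exact CriticControl procedure.
import torch
import torch.nn.functional as F

def critic_weighted_step(lm_logits, critic_values, beta=1.0):
    """lm_logits, critic_values: (vocab_size,) tensors for the current decoding step."""
    adjusted = F.log_softmax(lm_logits, dim=-1) + beta * critic_values
    return torch.distributions.Categorical(logits=adjusted).sample()
```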
no code implementations • 21 Dec 2022 • Hyunsoo Cho, Hyuhng Joon Kim, Junyeob Kim, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning.
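For intuition, a generic few-shot prompt-construction sketch (not tied to this paper): the model conditions on a handful of labeled demonstrations followed by the test input, with no parameter updates. The task and labels are hypothetical.

```python
# Generic few-shot prompt construction for in-context learning (illustrative;
# the sentiment task and label names are hypothetical).
def build_icl_prompt(demonstrations, test_input):
    """demonstrations: list of (text, label) pairs shown to the model in-context."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    blocks.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(blocks)

prompt = build_icl_prompt(
    [("Great movie, loved it.", "positive"), ("Terrible plot.", "negative")],
    "Surprisingly touching and well acted.",
)
```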
1 code implementation • 20 Oct 2022 • Hyunsoo Cho, Choonghyun Park, Jaewook Kang, Kang Min Yoo, Taeuk Kim, Sang-goo Lee
Out-of-distribution (OOD) detection aims to discern outliers from the intended data distribution, which is crucial to maintaining high reliability and a good user experience.
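As a point of reference, a standard maximum-softmax-probability baseline for OOD scoring looks like the sketch below; this is an illustrative baseline, not the method proposed in the paper.

```python
# Standard maximum-softmax-probability (MSP) baseline for OOD scoring (illustrative,
# not this paper's method): lower confidence suggests an out-of-distribution input.
import torch.nn.functional as F

def msp_score(logits):
    """logits: (B, n_classes); returns per-example max softmax probability."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def is_ood(logits, threshold=0.5):
    return msp_score(logits) < threshold
```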
no code implementations • 8 Oct 2022 • Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee
To combine parameter-efficient adaptation and model compression, we propose AlphaTuning, which consists of post-training quantization of the pre-trained language model and fine-tuning of only some parts of the quantized parameters for a target task.
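A rough sketch of that recipe under a simple 1-bit quantizer assumption: the quantized codes are frozen buffers while only the per-row scale factors remain trainable. The quantizer and class name are illustrative, not the paper's exact formulation.

```python
# Rough sketch under a simple 1-bit quantizer assumption (illustrative, not the
# paper's exact quantization scheme): the sign codes are frozen buffers and only
# the per-row scale factors (and bias) remain trainable for the target task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlphaTunedLinear(nn.Module):
    def __init__(self, weight, bias=None):
        super().__init__()
        self.register_buffer("codes", torch.sign(weight))      # frozen {-1, +1} codes
        self.alpha = nn.Parameter(weight.abs().mean(dim=1))    # trainable per-row scales
        self.bias = nn.Parameter(bias) if bias is not None else None

    def forward(self, x):
        w = self.alpha.unsqueeze(1) * self.codes               # reconstruct weights on the fly
        return F.linear(x, w, self.bias)
```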
1 code implementation • COLING 2022 • Xiaodong Gu, Zhaowei Zhang, Sang-Woo Lee, Kang Min Yoo, Jung-Woo Ha
While Transformers have had significant success in paragraph generation, they treat sentences as linear sequences of tokens and often neglect their hierarchical information.
no code implementations • 16 Jun 2022 • Hyuhng Joon Kim, Hyunsoo Cho, Junyeob Kim, Taeuk Kim, Kang Min Yoo, Sang-goo Lee
Large-scale pre-trained language models (PLMs) are well known for being able to solve a task simply by conditioning on a few input-label pairs, dubbed demonstrations, in a prompt, without being explicitly tuned for the desired downstream task.
no code implementations • 25 May 2022 • Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Taeuk Kim
Despite the recent explosion of interest in in-context learning, the underlying mechanism and the precise impact of demonstration quality remain elusive.
1 code implementation • 25 May 2022 • Jin-Hwa Kim, Yunji Kim, Jiyoung Lee, Kang Min Yoo, Sang-Woo Lee
Based on the recent trend of multimodal generative evaluations exploiting vision-and-language pre-trained models, we propose the negative Gaussian cross-mutual information using CLIP features as a unified metric, coined Mutual Information Divergence (MID).
Ranked #1 on Human Judgment Classification on Pascal-50S, Hallucination Pair-wise Detection (1-ref), Hallucination Pair-wise Detection (4-ref), and 5 more benchmarks.
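For intuition, a joint-Gaussian mutual-information estimate over paired CLIP image/text features could look like the sketch below; this is an assumption-laden illustration, not the paper's exact MID estimator.

```python
# Assumption-laden illustration: a joint-Gaussian mutual-information estimate over
# paired CLIP image/text features. This is NOT the paper's exact MID estimator.
import numpy as np

def gaussian_mutual_information(img_feats, txt_feats, eps=1e-6):
    """img_feats, txt_feats: (N, d) arrays of CLIP embeddings for matched pairs."""
    x = img_feats - img_feats.mean(axis=0)
    y = txt_feats - txt_feats.mean(axis=0)
    d = x.shape[1]
    cov_x = np.cov(x, rowvar=False) + eps * np.eye(d)
    cov_y = np.cov(y, rowvar=False) + eps * np.eye(d)
    cov_xy = np.cov(np.concatenate([x, y], axis=1), rowvar=False) + eps * np.eye(2 * d)
    # For jointly Gaussian X, Y: I(X;Y) = 0.5 * (log|Cov_X| + log|Cov_Y| - log|Cov_XY|)
    return 0.5 * (np.linalg.slogdet(cov_x)[1]
                  + np.linalg.slogdet(cov_y)[1]
                  - np.linalg.slogdet(cov_xy)[1])
```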
no code implementations • 25 May 2022 • Gangwoo Kim, Sungdong Kim, Kang Min Yoo, Jaewoo Kang
In this paper, we introduce a novel framework, SIMSEEK (Simulating Information-Seeking conversations from unlabeled documents), and compare its two variants.
1 code implementation • Findings (NAACL) 2022 • Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
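A minimal sketch of that setup, assuming a generic pair encoder and binary labels (consistent vs. inconsistent); the module names are placeholders rather than the paper's configuration.

```python
# Minimal sketch: a binary classifier over (document, summary) pairs labeled
# consistent/inconsistent. `pair_encoder` is a placeholder for any encoder that
# returns a pooled (B, hidden_dim) representation of the pair.
import torch.nn as nn

class ConsistencyClassifier(nn.Module):
    def __init__(self, pair_encoder, hidden_dim):
        super().__init__()
        self.pair_encoder = pair_encoder          # (document, summary) ids -> (B, hidden_dim)
        self.head = nn.Linear(hidden_dim, 2)      # 0 = inconsistent, 1 = consistent

    def forward(self, pair_ids):
        return self.head(self.pair_encoder(pair_ids))

# Training is ordinary cross-entropy over the two labels:
loss_fn = nn.CrossEntropyLoss()
```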
no code implementations • 4 Nov 2021 • Xiaodong Gu, Kang Min Yoo, Sang-Woo Lee
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling.
no code implementations • 16 Sep 2021 • Reinald Kim Amplayo, Kang Min Yoo, Sang-Woo Lee
Metadata attributes (e.g., user and product IDs from reviews) can be incorporated as additional inputs to neural-based NLP models by modifying the models' architecture, in order to improve their performance.
2 code implementations • EMNLP 2021 • Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, WooMyoung Park, Nako Sung
GPT-3 shows the remarkable in-context learning ability of large-scale language models (LMs) trained on data at the hundreds-of-billions scale.
1 code implementation • ACL 2021 • Taeuk Kim, Kang Min Yoo, Sang-goo Lee
In this work, we propose a contrastive learning method that utilizes self-guidance for improving the quality of BERT sentence representations.
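For intuition, a generic NT-Xent-style contrastive loss over two views of the same sentence is sketched below; the paper's self-guidance defines the positive views from BERT's own hidden states, which is not reproduced here.

```python
# Generic NT-Xent-style contrastive loss over two views of the same sentence;
# row i of each batch is treated as a positive pair and all other rows as negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(view_a, view_b, temperature=0.05):
    """view_a, view_b: (B, d) sentence embeddings."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                       # (B, B) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)     # positives on the diagonal
    return F.cross_entropy(logits, targets)
```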
1 code implementation • Findings (EMNLP) 2021 • Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyeong Park
Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them to be controlled via natural text prompts.
1 code implementation • 15 Apr 2021 • Raphael Shu, Kang Min Yoo, Jung-Woo Ha
Results show that reward optimization with BLEURT is able to increase the metric scores by a large margin, in contrast to the limited gain from training with smoothed BLEU.
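A minimal REINFORCE-style sketch of reward optimization with a learned metric, where `bleurt_score` is a hypothetical callable returning a scalar score for a hypothesis against its reference; this illustrates the general objective, not the paper's exact training setup.

```python
# Minimal REINFORCE-style sketch; `bleurt_score` is a hypothetical callable that
# returns a scalar reward for a sampled hypothesis against its reference.
import torch

def reinforce_loss(log_probs, hypotheses, references, bleurt_score, baseline=0.0):
    """log_probs: (B,) summed log-probabilities of the sampled hypotheses."""
    rewards = torch.tensor([bleurt_score(h, r) for h, r in zip(hypotheses, references)],
                           device=log_probs.device)
    # Maximizing expected reward = minimizing -(reward - baseline) * log p(hypothesis)
    return -((rewards - baseline) * log_probs).mean()
```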
1 code implementation • 3 Dec 2020 • Xiaodong Gu, Kang Min Yoo, Jung-Woo Ha
Recent advances in pre-trained language models have significantly improved neural response generation.
1 code implementation • EMNLP 2020 • Kang Min Yoo, Hanbit Lee, Franck Dernoncourt, Trung Bui, Walter Chang, Sang-goo Lee
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefits NLP tasks.
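A generic sketch of that pipeline, with `generate_synthetic` as a placeholder for any label-conditioned sampler (e.g., a fine-tuned generative language model); this is illustrative rather than the paper's exact recipe.

```python
# Illustrative sketch only; `generate_synthetic` is a placeholder for any
# label-conditioned sampler (e.g., a fine-tuned generative language model).
def augment_dataset(train_set, generate_synthetic, n_new_per_label=100):
    """train_set: list of (text, label) pairs; returns the augmented list."""
    labels = {label for _, label in train_set}
    synthetic = [(generate_synthetic(label), label)
                 for label in labels
                 for _ in range(n_new_per_label)]
    return train_set + synthetic
```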
3 code implementations • IJCNLP 2019 • Kang Min Yoo, Taeuk Kim, Sang-goo Lee
We propose a simple yet effective approach for improving Korean word representations using additional linguistic annotation (i.e., Hanja).
no code implementations • 7 Sep 2018 • Kang Min Yoo, Youhyun Shin, Sang-goo Lee
Data scarcity is one of the main obstacles to domain adaptation in spoken language understanding (SLU), due to the high cost of creating manually tagged SLU datasets.
no code implementations • 2 Dec 2017 • Kang Min Yoo, Youhyun Shin, Sang-goo Lee
Sentence representation models trained only on language could potentially suffer from the grounding problem.
1 code implementation • 10 Jul 2017 • Jihun Choi, Kang Min Yoo, Sang-goo Lee
For years, recursive neural networks (RvNNs) have been shown to be suitable for representing text as fixed-length vectors and have achieved good performance on several natural language processing tasks.
Ranked #62 on Natural Language Inference on SNLI.