no code implementations • NAACL (GeBNLP) 2022 • Jaimeen Ahn, Hwaran Lee, JinHwa Kim, Alice Oh
Knowledge distillation is widely used to transfer the language understanding of a large model to a smaller one. However, the distilled smaller model has been found to exhibit more gender bias than the source large model. This paper studies what causes gender bias to increase after knowledge distillation. Moreover, we suggest applying a variant of mixup during knowledge distillation, used to increase generalizability during the distillation process rather than for data augmentation. Doing so significantly reduces gender bias amplification after distillation. We also run experiments on the GLUE benchmark to show that applying mixup has no significant adverse effect on the model's performance.
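The excerpt gives no implementation details; as a rough illustration only, the sketch below (PyTorch, with assumed names such as `student`, `teacher`, and the `alpha`/`temperature` hyperparameters) shows one way mixup could be folded into the distillation loss itself, mixing two input embeddings and the correspondingly mixed teacher soft targets rather than using mixup for data augmentation.

```python
import torch
import torch.nn.functional as F

def mixup_distillation_loss(student, teacher, input_a, input_b, alpha=0.4, temperature=2.0):
    """Hypothetical sketch: mix two input embeddings and distill from the
    correspondingly mixed teacher distribution (not used for augmentation).
    `student` and `teacher` are assumed callables mapping embeddings to logits."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed_input = lam * input_a + (1.0 - lam) * input_b  # mixup on input embeddings

    with torch.no_grad():
        teacher_probs = lam * F.softmax(teacher(input_a) / temperature, dim=-1) \
                      + (1.0 - lam) * F.softmax(teacher(input_b) / temperature, dim=-1)

    student_log_probs = F.log_softmax(student(mixed_input) / temperature, dim=-1)
    # KL divergence between the mixed teacher targets and the student on the mixed input
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2
```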
no code implementations • 21 Dec 2022 • Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
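CriticControl's architecture is not detailed in this excerpt; the following sketch (hypothetical names, PyTorch) only illustrates the generic weighted-decoding idea it builds on, where a learned critic's per-token scores reweight the language model's next-token distribution.

```python
import torch
import torch.nn.functional as F

def critic_weighted_step(lm_logits, critic_scores, beta=2.0):
    """Reweight next-token probabilities with a critic (illustrative only).

    lm_logits:     [vocab] logits from the language model for the next token
    critic_scores: [vocab] critic estimates (e.g. expected reward) per candidate token
    beta:          strength of the critic's influence (assumed hyperparameter)
    """
    lm_log_probs = F.log_softmax(lm_logits, dim=-1)
    combined = lm_log_probs + beta * critic_scores   # weighted decoding
    return torch.argmax(combined)                    # greedy pick; sampling is also possible
```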
no code implementations • 24 May 2022 • Miyoung Ko, Ingyu Seong, Hwaran Lee, Joonsuk Park, Minsuk Chang, Minjoon Seo
As the importance of identifying misinformation is increasing, many researchers focus on verifying textual claims on the web.
1 code implementation • Findings (NAACL) 2022 • Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
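As a hedged illustration of such a classifier (not the paper's model; the checkpoint name and helper function are assumptions), a (document, summary) pair can be scored for factual consistency with a standard sequence-classification head:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed setup: any encoder checkpoint could be used; "roberta-base" is an assumption.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def consistency_logits(document, summary):
    """Score a (document, summary) pair; label 1 = factually consistent, 0 = inconsistent."""
    inputs = tokenizer(document, summary, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits
```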
no code implementations • Findings (ACL) 2022 • Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
To this end, we first propose a novel task--Continuously-updated QA (CuQA)--in which multiple large-scale updates are made to LMs, and the performance is measured with respect to the success in adding and updating knowledge while retaining existing knowledge.
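For illustration only, a CuQA-style evaluation could report separate accuracies for added, updated, and retained QA pairs after each large-scale update; the helper below and its `status` field are assumptions, not the paper's protocol.

```python
def cuqa_scores(model_answers, gold):
    """Hypothetical evaluation sketch: accuracy on newly added, updated, and
    unchanged (retained) QA pairs after an LM knowledge update."""
    def acc(keys):
        return sum(model_answers[k] == gold[k]["answer"] for k in keys) / max(len(keys), 1)

    added    = [k for k, v in gold.items() if v["status"] == "added"]
    updated  = [k for k, v in gold.items() if v["status"] == "updated"]
    retained = [k for k, v in gold.items() if v["status"] == "retained"]
    return {"added": acc(added), "updated": acc(updated), "retained": acc(retained)}
```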
no code implementations • 22 Sep 2020 • Hwaran Lee, Seokhwan Jo, HyungJun Kim, SangKeun Jung, Tae-Yoon Kim
To the best of our knowledge, our work is the first comprehensive study of a modularized E2E multi-domain dialog system that learns everything from each individual component up to the entire dialog policy for task success.
1 code implementation • Findings (EMNLP) 2021 • Gi-Cheon Kang, Junseok Park, Hwaran Lee, Byoung-Tak Zhang, Jin-Hwa Kim
Visual dialog is a task of answering a sequence of questions grounded in an image using the previous dialog history as context.
3 code implementations • ACL 2019 • Hwaran Lee, Jinsik Lee, Tae-Yoon Kim
In goal-oriented dialog systems, belief trackers estimate the probability distribution of slot-values at every dialog turn.
Ranked #17 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.0
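As a minimal sketch of the belief-tracking formulation described above (not the paper's architecture; names and shapes are assumed), a distribution over candidate slot values can be obtained by scoring each value embedding against the current turn representation:

```python
import torch
import torch.nn.functional as F

def slot_value_distribution(turn_repr, value_embeddings):
    """Minimal sketch: estimate a probability distribution over candidate slot values
    at the current dialog turn.

    turn_repr:        [hidden] encoding of the dialog context at this turn
    value_embeddings: [num_values, hidden] embeddings of candidate slot values
    """
    scores = value_embeddings @ turn_repr    # dot-product similarity per candidate value
    return F.softmax(scores, dim=-1)         # probability over slot values
```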
1 code implementation • 6 Nov 2018 • Geonmin Kim, Hwaran Lee, Bo-Kyeong Kim, Sang-Hoon Oh, Soo-Young Lee
Many speech enhancement methods try to learn the relationship between noisy and clean speech, with the paired training data obtained using an acoustic room simulator.
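As a toy illustration of learning that noisy-to-clean mapping (the architecture and shapes below are assumptions, not the paper's model), a mask estimator can be trained on simulator-generated (noisy, clean) spectrogram pairs:

```python
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Toy sketch: predict a magnitude mask that maps a noisy spectrogram
    toward the clean one, trained on simulated (noisy, clean) pairs."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag):                 # noisy_mag: [time, n_freq]
        return self.net(noisy_mag) * noisy_mag    # enhanced magnitude estimate

# Training would minimize e.g. nn.MSELoss() between the estimate and the clean magnitude.
```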
no code implementations • 10 Jun 2016 • Hwaran Lee, Geonmin Kim, Ho-Gyeong Kim, Sang-Hoon Oh, Soo-Young Lee
Convolutional neural networks (CNNs) with convolutional and pooling operations along the frequency axis have been proposed to attain invariance to frequency shifts of features.
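A minimal sketch of that idea (assumed kernel and pooling sizes, PyTorch): convolving and max-pooling along the frequency axis of a spectrogram makes the resulting features less sensitive to small frequency shifts.

```python
import torch
import torch.nn as nn

# Convolve and pool along the frequency axis only (sizes are illustrative assumptions).
freq_conv = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=(8, 1)),   # kernel spans 8 frequency bins, 1 time frame
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=(4, 1)),        # pooling along frequency only
)

spectrogram = torch.randn(1, 1, 128, 100)    # [batch, channel, freq, time]
features = freq_conv(spectrogram)            # -> [1, 32, 30, 100]
```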
no code implementations • 2 May 2016 • Geonmin Kim, Hwaran Lee, Jisu Choi, Soo-Young Lee
In the HCRN, word representations are built from characters, thus resolving the data-sparsity problem, and inter-sentence dependency is embedded into sentence representation at the level of sentence composition.
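As a rough sketch of such a character-to-word-to-sentence hierarchy (not the exact HCRN; dimensions and layer choices are assumptions), a character-level RNN can build word vectors that a word-level RNN then composes into a sentence representation:

```python
import torch
import torch.nn as nn

class CharToWordToSentence(nn.Module):
    """Rough sketch: characters -> word vectors -> sentence vector.
    A further sentence-level RNN (omitted here) would capture inter-sentence dependency."""
    def __init__(self, n_chars=128, char_dim=16, word_dim=64, sent_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_rnn = nn.GRU(char_dim, word_dim, batch_first=True)
        self.word_rnn = nn.GRU(word_dim, sent_dim, batch_first=True)

    def forward(self, char_ids):                            # char_ids: [num_words, max_word_len]
        _, word_h = self.char_rnn(self.char_emb(char_ids))  # word_h: [1, num_words, word_dim]
        _, sent_h = self.word_rnn(word_h)                   # treat words as one sentence sequence
        return sent_h.squeeze(0).squeeze(0)                 # sentence vector: [sent_dim]
```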