no code implementations • 20 Dec 2024 • Gyutae Park, Ingeol Baek, Byeongjeong Kim, Joongbo Shin, Hwanhee Lee
Dialogue intent classification aims to identify the underlying purpose or intent of a user's input in a conversation.
1 code implementation • NLP4ConvAI Workshop (ACL) 2024 • Janghoon Han, Dongkyu Lee, Joongbo Shin, Hyunkyung Bae, Jeesoo Bang, SeongHwan Kim, Stanley Jungkyu Choi, Honglak Lee
Recent studies have demonstrated significant improvements in selection tasks, and much of this success is attributed to incorporating informative negative samples during training (see the sketch after this entry).
Ranked #1 on Conversational Response Selection on E-commerce
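The following is a minimal, illustrative sketch (not the paper's implementation) of how informative negative samples typically enter a selection-model training step: each context is scored against its gold response, the other in-batch responses, and one explicitly mined hard negative, and cross-entropy pushes the gold response above the rest. All tensor names and shapes are assumptions.

    import torch
    import torch.nn.functional as F

    def selection_loss(context_emb, response_emb, hard_negative_emb):
        """context_emb, response_emb: [batch, dim]; hard_negative_emb: [batch, dim]."""
        # Scores against every in-batch response; off-diagonal entries act as negatives.
        in_batch_scores = context_emb @ response_emb.t()                       # [batch, batch]
        # Scores against one explicitly mined "informative" negative per context.
        hard_scores = (context_emb * hard_negative_emb).sum(-1, keepdim=True)  # [batch, 1]
        logits = torch.cat([in_batch_scores, hard_scores], dim=1)              # [batch, batch+1]
        labels = torch.arange(context_emb.size(0))                             # gold response sits on the diagonal
        return F.cross_entropy(logits, labels)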
1 code implementation • 13 Jun 2024 • Janghoon Han, Changho Lee, Joongbo Shin, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
Subsequently, we assess the performance on unseen tasks in a language different from the one used for training.
1 code implementation • 15 Aug 2023 • Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, Kyomin Jung
However, the existing VMR framework evaluates video moment retrieval performance under the assumption that the relevant video is always given, which may not reveal whether models become overconfident when an irrelevant video is provided.
1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved zero-shot task generalization performance (a minimal sketch of this objective follows this entry).
Ranked #2 on Question Answering on StoryCloze
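Below is a minimal sketch of the meta-training objective described in this entry: the LM is fine-tuned so that the likelihood of the target label tokens, conditioned on the concatenated task instruction and input instance, is maximized. The Hugging Face model name and the example task are placeholders, not the setup used in the paper.

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    instruction = "Is the following review positive or negative?"
    instance = "The movie was a complete waste of time."
    target_label = "negative"

    inputs = tokenizer(instruction + " " + instance, return_tensors="pt")
    labels = tokenizer(target_label, return_tensors="pt").input_ids

    # The returned loss is the negative log-likelihood of the label tokens,
    # so minimizing it maximizes the likelihood used in meta-training.
    loss = model(**inputs, labels=labels).loss
    loss.backward()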
1 code implementation • 6 Sep 2022 • Janghoon Han, Joongbo Shin, Hosung Song, Hyunjik Jo, Gyeonghun Kim, Yireun Kim, Stanley Jungkyu Choi
In the experiment, we investigate the effect of weighted negative sampling, post-training, and style transfer.
1 code implementation • 29 Apr 2022 • Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo
Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment.
2 code implementations • ICLR 2022 • Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.
1 code implementation • ICLR 2021 • Yoonhyung Lee, Joongbo Shin, Kyomin Jung
Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive (AR) architectures are limited in that generating a mel-spectrogram of hundreds of frames requires just as many sequential decoding steps and therefore considerable time.
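As a toy illustration of this limitation (not Tacotron 2 itself), the sketch below shows why AR decoding is slow: each mel frame is conditioned on the previously generated frame, so a spectrogram of several hundred frames costs the same number of sequential decoder calls. The function and dimension names are assumptions.

    import torch

    def ar_decode(decoder_step, text_encoding, num_frames=800, mel_dim=80):
        frame = torch.zeros(1, mel_dim)          # "go" frame
        frames = []
        for _ in range(num_frames):              # strictly sequential; cannot be parallelized
            frame = decoder_step(frame, text_encoding)
            frames.append(frame)
        return torch.stack(frames, dim=1)        # [1, num_frames, mel_dim]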
1 code implementation • NAACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Joongbo Shin, Kyomin Jung
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
1 code implementation • ACL 2020 • Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
Even though BERT achieves successful performance improvements in various supervised learning tasks, applying BERT to unsupervised tasks is still limited by the fact that it requires repetitive inference to compute contextual language representations.
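The sketch below illustrates that repetitive-inference issue with a masked LM such as BERT, assuming the Hugging Face transformers API: scoring every token of a sentence without letting a token see itself requires one forward pass per position. This illustrates the limitation, not the model proposed in the paper.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

    sentence = "the quick brown fox jumps over the lazy dog"
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]

    log_probs = []
    with torch.no_grad():
        for i in range(1, len(ids) - 1):              # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id       # mask one position at a time
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs.append(torch.log_softmax(logits, -1)[ids[i]].item())
    # len(ids) - 2 separate forward passes were needed for a single sentence.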
no code implementations • 16 May 2019 • Joongbo Shin, Yoonhyung Lee, Kyomin Jung
Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the $N$-best list decoded from the acoustic model (a minimal rescoring sketch follows this entry).
Automatic Speech Recognition (ASR) +4
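A minimal rescoring sketch, assuming each hypothesis comes with an acoustic-model log score and that lm_log_prob is whichever uniLM or biLM scorer is plugged in; the interpolation weight is a placeholder normally tuned on development data.

    # N-best rescoring: re-rank ASR hypotheses by combining the acoustic-model
    # score with a language-model score.
    def rescore(nbest, lm_log_prob, lam=0.5):
        """nbest: list of (hypothesis_text, acoustic_log_score) pairs."""
        rescored = [
            (hyp, acoustic + lam * lm_log_prob(hyp))
            for hyp, acoustic in nbest
        ]
        return max(rescored, key=lambda pair: pair[1])[0]   # best hypothesis after rescoring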
2 code implementations • 17 Nov 2018 • Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung
Some news headlines mislead readers with overrated or false information, and identifying them in advance will better assist readers in choosing proper news stories to consume.
1 code implementation • 7 Sep 2018 • Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung
Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in unintended questions.
3 code implementations • NAACL 2018 • Seunghyun Yoon, Joongbo Shin, Kyomin Jung
In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module (a rough sketch of such a module follows this entry).
Ranked #1 on Answer Selection on Ubuntu Dialogue (v1, Ranking)
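The sketch below is a rough guess at how a latent topic clustering module can sit on top of an RNN sentence encoding: the encoding attends over a small set of learned topic vectors, and the resulting topic summary is concatenated back before answer scoring. Dimensions, names, and the exact attention form are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class LatentTopicClustering(nn.Module):
        def __init__(self, hidden_dim=256, num_topics=8):
            super().__init__()
            # Learned topic vectors shared across all inputs.
            self.topics = nn.Parameter(torch.randn(num_topics, hidden_dim))

        def forward(self, encoding):                                       # encoding: [batch, hidden_dim]
            weights = torch.softmax(encoding @ self.topics.t(), dim=-1)    # soft assignment over topics
            topic_summary = weights @ self.topics                          # [batch, hidden_dim]
            return torch.cat([encoding, topic_summary], dim=-1)            # enriched representation for scoring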