no code implementations • 7 Sep 2018 • Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung
Previous NQG (neural question generation) models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in the generation of unintended questions.
no code implementations • 16 May 2019 • Joongbo Shin, Yoonhyung Lee, Kyomin Jung
Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the $N$-best list decoded from the acoustic model.
Automatic Speech Recognition (ASR) +3
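The rescoring setup described above can be sketched as interpolating each hypothesis's acoustic score with a language-model score. This is a minimal illustration, not the paper's method: the interpolation weight, the toy word-frequency "LM" standing in for a trained biLM, and the function names are all assumptions.

```python
import math

def rescore_nbest(hypotheses, lm_score, alpha=0.5):
    """Re-rank an N-best list by interpolating acoustic and LM log-scores.

    hypotheses: list of (text, acoustic_logprob) pairs from the ASR decoder.
    lm_score:   callable returning a log-probability for a sentence
                (a uniLM scores left-to-right; a biLM uses both directions).
    alpha:      LM interpolation weight (illustrative value, not from the paper).
    """
    scored = [(text, ac + alpha * lm_score(text)) for text, ac in hypotheses]
    # The hypothesis with the highest combined score wins.
    return max(scored, key=lambda pair: pair[1])[0]

# Toy unigram "LM" standing in for a trained language model.
freq = {"the": 0.5, "cat": 0.3, "sat": 0.2, "cap": 0.01}
def toy_lm(sentence):
    return sum(math.log(freq.get(w, 1e-6)) for w in sentence.split())

nbest = [("the cap sat", -1.0), ("the cat sat", -1.2)]
print(rescore_nbest(nbest, toy_lm))  # the LM overturns the acoustic ranking
```

Here the acoustic model slightly prefers the misrecognition "the cap sat", but the LM term flips the decision to "the cat sat", which is exactly the role rescoring plays.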
1 code implementation • 6 Sep 2022 • Janghoon Han, Joongbo Shin, Hosung Song, Hyunjik Jo, Gyeonghun Kim, Yireun Kim, Stanley Jungkyu Choi
In the experiment, we investigate the effect of weighted negative sampling, post-training, and style transfer.
no code implementations • 15 Aug 2023 • Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, Kyomin Jung
With the explosion of multimedia content in recent years, Video Corpus Moment Retrieval (VCMR), which aims to detect a video moment that matches a given natural language query from multiple videos, has become a critical problem.
3 code implementations • NAACL 2018 • Seunghyun Yoon, Joongbo Shin, Kyomin Jung
In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module.
Ranked #1 on Answer Selection on Ubuntu Dialogue (v1, Ranking)
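The latent topic clustering idea can be sketched as softly assigning a sentence encoding to a set of trainable topic vectors and concatenating the resulting topic summary back onto the encoding. This is a hedged sketch: the dimensions, the number of topics, and the function interface are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def latent_topic_cluster(h, topics):
    """Augment a sentence encoding h with latent topic information.

    h:      (d,) encoding of a candidate answer (e.g. a hierarchical
            RNN's final hidden state -- names here are illustrative).
    topics: (K, d) trainable topic memory; K is a hyperparameter.
    Returns the concatenation [h; weighted topic vector].
    """
    weights = softmax(topics @ h)          # soft assignment over K topics
    topic_vec = weights @ topics           # convex combination of topic vectors
    return np.concatenate([h, topic_vec])  # fed to the final ranking layer

rng = np.random.default_rng(0)
h = rng.normal(size=8)
topics = rng.normal(size=(4, 8))
out = latent_topic_cluster(h, topics)
print(out.shape)  # (16,)
```

In the full model the topic memory is learned jointly with the ranker, so the clusters specialize to whatever topical structure helps answer selection.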
1 code implementation • NAACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Joongbo Shin, Kyomin Jung
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
2 code implementations • 17 Nov 2018 • Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung
Some news headlines mislead readers with exaggerated or false information, and identifying them in advance helps readers choose which news stories to consume.
1 code implementation • 29 Apr 2022 • Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo
Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment.
2 code implementations • ICLR 2022 • Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.
1 code implementation • ACL 2020 • Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
Even though BERT achieves strong performance improvements on various supervised learning tasks, applying it to unsupervised tasks is limited by the repetitive inference required to compute contextual language representations.
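The repetitive-inference cost can be made concrete: to obtain a representation of each token that does not see the token itself (as a masked-LM objective requires), each position must be masked in turn, so an n-token sentence needs n full forward passes. The sketch below only counts those passes; the `encode` interface is a hypothetical stand-in for a BERT-style encoder.

```python
def masked_lm_representations(tokens, encode, mask="[MASK]"):
    """Contextual representations via masked inference, one pass per token.

    `encode` stands in for a BERT-style encoder (assumed interface:
    list of tokens -> list of per-position vectors). Masking each
    position separately is what makes this O(n) forward passes.
    """
    reps, passes = [], 0
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        reps.append(encode(masked)[i])  # representation of the masked slot
        passes += 1
    return reps, passes

# Dummy encoder: each token's "vector" is just its character length.
encode = lambda toks: [len(t) for t in toks]
reps, passes = masked_lm_representations(["the", "cat", "sat"], encode)
print(passes)  # 3 passes for a 3-token sentence
```

A single-pass alternative (the direction this line of work pursues) would compute all such representations in one encoder call instead.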
1 code implementation • 6 Oct 2022 • Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved zero-shot task generalization.
Ranked #2 on Question Answering on StoryCloze
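The meta-training objective described above reduces to the negative log-likelihood of the gold label tokens conditioned on the instruction and input. The sketch below computes that quantity with a toy stand-in for an LM; the `model` interface and the example task are illustrative assumptions.

```python
import math

def nll_of_label(model, instruction, x, label_tokens):
    """Negative log-likelihood of the target label given instruction + input,
    the quantity meta-training minimizes.

    `model(context)` is assumed to return a next-token distribution
    (a hypothetical stand-in for an autoregressive LM).
    """
    context = instruction + x
    nll = 0.0
    for tok in label_tokens:
        probs = model(context)
        nll -= math.log(probs.get(tok, 1e-9))
        context = context + [tok]  # teacher forcing on the gold label
    return nll

# Toy "LM" that always emits the same distribution over label words.
toy_model = lambda ctx: {"positive": 0.7, "negative": 0.3}
loss = nll_of_label(toy_model,
                    instruction=["Classify", "the", "sentiment:"],
                    x=["great", "movie"],
                    label_tokens=["positive"])
print(round(loss, 3))  # -log 0.7
```

Averaging this loss across many instruction-formatted tasks is what gives the meta-trained LM its zero-shot behavior on unseen tasks.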
1 code implementation • ICLR 2021 • Yoonhyung Lee, Joongbo Shin, Kyomin Jung
Although early text-to-speech (TTS) models such as Tacotron 2 succeed in generating human-like speech, their autoregressive (AR) architectures are limited by the long time required to generate a mel-spectrogram consisting of hundreds of steps.
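The AR bottleneck can be seen in a generation loop: each mel frame depends on the previous one, so the hundreds of decoder steps must run sequentially. This is a shape-level sketch only; `decoder_step` is a hypothetical stand-in for a Tacotron-2-style decoder cell, not the paper's model.

```python
import numpy as np

def ar_generate_mel(decoder_step, n_frames, n_mels=80):
    """Autoregressive mel-spectrogram generation, one frame at a time.

    Step t consumes the frame produced at step t-1, so the loop cannot
    be parallelized across frames -- the latency cost of AR TTS.
    `decoder_step` is an assumed interface: previous frame -> next frame.
    """
    frame = np.zeros(n_mels)           # initial <GO> frame
    frames = []
    for _ in range(n_frames):          # strictly sequential dependency chain
        frame = decoder_step(frame)
        frames.append(frame)
    return np.stack(frames)            # (n_frames, n_mels)

# Dummy decoder step, used only to check shapes and the dependency pattern.
step = lambda prev: prev + 1.0
mel = ar_generate_mel(step, n_frames=200)
print(mel.shape)  # (200, 80)
```

Non-autoregressive TTS models remove this chain by predicting all frames in parallel, trading the sequential dependency for other modeling machinery.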