Search Results for author: Joongbo Shin

Found 8 papers, 6 papers with code

Towards Continual Knowledge Learning of Language Models

2 code implementations • 7 Oct 2021 Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.

Continual Learning · Fact Checking · +1

Bidirectional Variational Inference for Non-Autoregressive Text-to-Speech

1 code implementation ICLR 2021 Yoonhyung Lee, Joongbo Shin, Kyomin Jung

Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive (AR) architectures are slow because generating a mel-spectrogram requires hundreds of sequential steps.

Speech Quality · Variational Inference

Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning

1 code implementation ACL 2020 Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung

Even though BERT achieves successful performance improvements on various supervised learning tasks, applying BERT to unsupervised tasks remains limited by the repetitive inference required to compute contextual language representations.

Language Modelling · Semantic Similarity · +1

Effective Sentence Scoring Method using Bidirectional Language Model for Speech Recognition

no code implementations • 16 May 2019 Joongbo Shin, Yoonhyung Lee, Kyomin Jung

Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the $N$-best list decoded from the acoustic model.

Automatic Speech Recognition

Detecting Incongruity Between News Headline and Body Text via a Deep Hierarchical Encoder

2 code implementations • 17 Nov 2018 Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung

Some news headlines mislead readers with exaggerated or false information, and identifying such headlines in advance helps readers choose which news stories are worth consuming.

Data Augmentation · Fake News Detection · +2

Improving Neural Question Generation using Answer Separation

no code implementations • 7 Sep 2018 Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung

Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target (the answer), resulting in unintended questions.

Question Generation

Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering

3 code implementations NAACL 2018 Seunghyun Yoon, Joongbo Shin, Kyomin Jung

In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module.

Answer Selection · Learning-To-Rank
