Search Results for author: Joongbo Shin

Found 12 papers, 9 with code

Is it Really Negative? Evaluating Natural Language Video Localization Performance on Multiple Reliable Videos Pool

no code implementations · 15 Aug 2023 · Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, Kyomin Jung

With the explosion of multimedia content in recent years, Video Corpus Moment Retrieval (VCMR), which aims to detect a video moment that matches a given natural language query from multiple videos, has become a critical problem.

Contrastive Learning · Moment Retrieval +3

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

1 code implementation · 6 Oct 2022 · Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance.

Common Sense Reasoning · Coreference Resolution +6

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

1 code implementation · 29 Apr 2022 · Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment.

Continual Learning

Towards Continual Knowledge Learning of Language Models

2 code implementations · ICLR 2022 · Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs.

Continual Learning · Fact Checking +2

Bidirectional Variational Inference for Non-Autoregressive Text-to-Speech

1 code implementation · ICLR 2021 · Yoonhyung Lee, Joongbo Shin, Kyomin Jung

Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive (AR) architectures are limited in that generating a mel-spectrogram, which consists of hundreds of steps, takes a substantial amount of time.

Variational Inference

Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning

1 code implementation · ACL 2020 · Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung

Even though BERT achieves successful performance improvements on various supervised learning tasks, applying BERT to unsupervised tasks is still limited by the repetitive inference it requires to compute contextual language representations.

Language Modelling · Semantic Similarity +1

Effective Sentence Scoring Method using Bidirectional Language Model for Speech Recognition

no code implementations · 16 May 2019 · Joongbo Shin, Yoonhyung Lee, Kyomin Jung

Recent studies have tried using bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) to rescore the $N$-best list decoded from the acoustic model.
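The rescoring idea above can be sketched in a few lines: score each hypothesis by summing log P(w_i | left and right context) over its positions and keep the best-scoring one. The count-based "biLM" below is a toy stand-in, not the paper's trained model; the corpus and all names are illustrative assumptions.

```python
import math

# Tiny hand-made corpus standing in for biLM training data (illustrative only).
corpus = [
    "the cat sat on the mat".split(),
    "the cat sat on the rug".split(),
    "a dog sat on the mat".split(),
]
vocab_size = len({w for s in corpus for w in s})

def p_word_given_context(word, left, right):
    # Probability of `word` appearing between `left` and `right`, with
    # add-one smoothing — a stand-in for a biLM's output distribution.
    hits = ctx = 0
    for s in corpus:
        for i in range(1, len(s) - 1):
            if s[i - 1] == left and s[i + 1] == right:
                ctx += 1
                hits += s[i] == word
    return (hits + 1) / (ctx + vocab_size)

def bilm_score(sentence):
    # Sum of log P(w_i | both-side context) over interior positions:
    # the bidirectional analogue of a uniLM's log-likelihood.
    w = sentence.split()
    return sum(
        math.log(p_word_given_context(w[i], w[i - 1], w[i + 1]))
        for i in range(1, len(w) - 1)
    )

# Rescore a mock ASR N-best list: keep the hypothesis the biLM prefers.
nbest = ["the cat sad on the mat", "the cat sat on the mat"]
best = max(nbest, key=bilm_score)
```

In a real system the two hypotheses would come from the acoustic model's decoder, and the biLM score would typically be interpolated with the acoustic score rather than used alone.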

Automatic Speech Recognition (ASR) +3

Detecting Incongruity Between News Headline and Body Text via a Deep Hierarchical Encoder

2 code implementations · 17 Nov 2018 · Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung

Some news headlines mislead readers with overrated or false information, and identifying them in advance will better assist readers in choosing proper news stories to consume.

Data Augmentation · Fake News Detection +2

Improving Neural Question Generation using Answer Separation

no code implementations · 7 Sep 2018 · Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung

Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in unintended questions.

Question Generation

Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering

3 code implementations · NAACL 2018 · Seunghyun Yoon, Joongbo Shin, Kyomin Jung

In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module.

Answer Selection · Clustering +1
