Search Results for author: Jaehyo Yoo

Found 5 papers, 2 papers with code

Lack of Fluency is Hurting Your Translation Model

no code implementations · 24 May 2022 · Jaehyo Yoo, Jaewoo Kang

While most training sentences are created via automatic techniques such as crawling and sentence-alignment methods, test sentences are annotated by humans with fluency in mind.

Machine Translation · Sentence · +1

Simple Questions Generate Named Entity Recognition Datasets

1 code implementation · 16 Dec 2021 · Hyunjae Kim, Jaehyo Yoo, Seunghyun Yoon, Jinhyuk Lee, Jaewoo Kang

Recent named entity recognition (NER) models often rely on human-annotated datasets, which require significant professional knowledge of the target domain and entities.

Few-shot NER · Named Entity Recognition · +1

Transferability of Natural Language Inference to Biomedical Question Answering

2 code implementations · 1 Jul 2020 · Minbyul Jeong, Mujeen Sung, Gangwoo Kim, Donghyeon Kim, Wonjin Yoon, Jaehyo Yoo, Jaewoo Kang

We observe that BioBERT trained on the NLI dataset obtains better performance on Yes/No (+5.59%), Factoid (+0.53%), and List type (+13.58%) questions compared to performance obtained in a previous challenge (BioASQ 7B Phase B).

Natural Language Inference · Question Answering · +2

SANVis: Visual Analytics for Understanding Self-Attention Networks

no code implementations · 13 Sep 2019 · Cheonbok Park, Inyoup Na, Yongjang Jo, Sungbok Shin, Jaehyo Yoo, Bum Chul Kwon, Jian Zhao, Hyungjong Noh, Yeonsoo Lee, Jaegul Choo

Attention networks, a deep neural network architecture inspired by the human attention mechanism, have seen significant success in image captioning, machine translation, and many other applications.

Image Captioning · Machine Translation · +2
