no code implementations • WMT (EMNLP) 2020 • WonKee Lee, Jaehun Shin, Baikjin Jung, Jihyung Lee, Jong-Hyeok Lee
In our experiment, we implemented a noising module that simulates four types of post-editing errors, and we introduced this module into a Transformer-based multi-source APE model.
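A minimal sketch of such a noising module, assuming the four error types are token deletion, substitution, insertion, and reordering (the abstract does not name them); `vocab` and the error rate `p` are illustrative placeholders:

```python
import random

def add_noise(tokens, vocab, p=0.1):
    """Corrupt a clean (post-edited) sentence with four synthetic
    post-editing error types: deletion, substitution, insertion, reordering."""
    noisy = []
    i = 0
    while i < len(tokens):
        r = random.random()
        if r < p:                                  # deletion: drop the token
            i += 1
        elif r < 2 * p:                            # substitution: random replacement
            noisy.append(random.choice(vocab))
            i += 1
        elif r < 3 * p:                            # insertion: spurious word, keep original
            noisy.append(random.choice(vocab))
        elif r < 4 * p and i + 1 < len(tokens):    # reordering: swap adjacent tokens
            noisy.extend([tokens[i + 1], tokens[i]])
            i += 2
        else:                                      # no corruption
            noisy.append(tokens[i])
            i += 1
    return noisy

print(add_noise("the cat sat on the mat".split(), vocab=["dog", "ran"]))
```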
no code implementations • WMT (EMNLP) 2020 • Jihyung Lee, WonKee Lee, Jaehun Shin, Baikjin Jung, Young-Kil Kim, Jong-Hyeok Lee
This paper describes POSTECH-ETRI’s submission to WMT2020 for the shared task on automatic post-editing (APE) for 2 language pairs: English-German (En-De) and English-Chinese (En-Zh).
no code implementations • WMT (EMNLP) 2021 • Dam Heo, WonKee Lee, Baikjin Jung, Jong-Hyeok Lee
This paper describes POSTECH’s quality estimation systems submitted to Task 2 of the WMT 2021 quality estimation shared task: Word and Sentence-Level Post-editing Effort.
1 code implementation • ACL (IWSLT) 2021 • Aren Siekmeier, WonKee Lee, HongSeok Kwon, Jong-Hyeok Lee
We implemented a neural machine translation system that uses automatic sequence tagging to improve the quality of translation.
no code implementations • NAACL (NUSE) 2021 • Myungji Lee, HongSeok Kwon, Jaehun Shin, WonKee Lee, Baikjin Jung, Jong-Hyeok Lee
Moreover, in an attempt to improve the model architecture of previous studies, we replace the LSTM with the Transformer.
no code implementations • 17 May 2023 • Baikjin Jung, Myungji Lee, Jong-Hyeok Lee, Yunsu Kim
Automatic post-editing (APE) is an automated process to refine a given machine translation (MT).
no code implementations • 8 Apr 2022 • WonKee Lee, Seong-Hwan Heo, Baikjin Jung, Jong-Hyeok Lee
Semi-supervised learning that leverages synthetic training data has been widely adopted in the field of automatic post-editing (APE) to overcome the lack of human-annotated training data.
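One common recipe for such synthetic data (a sketch only; the abstract does not specify the authors' exact construction) turns a parallel corpus into (src, mt, pe) triplets by machine-translating the source side and treating the human reference as the post-edit:

```python
def build_synthetic_triplets(parallel_corpus, translate):
    """Turn (source, reference) pairs into synthetic APE triplets.

    translate: any black-box MT function, src sentence -> mt sentence.
    The human reference plays the role of the 'post-edited' target (pe).
    """
    triplets = []
    for src, ref in parallel_corpus:
        mt = translate(src)              # imperfect MT output to be corrected
        triplets.append((src, mt, ref))  # (src, mt, pe)
    return triplets
```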
no code implementations • 24 Mar 2022 • Seong-Hwan Heo, WonKee Lee, Jong-Hyeok Lee
Zero-shot slot filling has received considerable attention to cope with the problem of limited available data for the target domain.
1 code implementation • EACL 2021 • WonKee Lee, Baikjin Jung, Jaehun Shin, Jong-Hyeok Lee
Automatic Post-Editing (APE) aims to correct errors in the output of a given machine translation (MT) system.
no code implementations • WS 2020 • Junsu Park, Hong-Seok Kwon, Jong-Hyeok Lee
In this paper, we propose a transfer-learning-based simultaneous translation model by extending BART.
no code implementations • 28 Oct 2019 • Jonggu Kim, Jong-Hyeok Lee
We propose two methods to capture relevant history information in a multi-turn dialogue by modeling inter-speaker relationship for spoken language understanding (SLU).
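A rough illustration of one way to model the inter-speaker relationship, assuming separate attention parameters for same-speaker and other-speaker history turns (the abstract does not detail the architecture, so this is an assumption):

```python
import torch
import torch.nn as nn

class SpeakerAwareHistory(nn.Module):
    """Aggregate dialogue history with separate attention scorers for
    utterances by the current speaker vs. the other speaker (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.attn_self = nn.Linear(dim, 1)   # scores same-speaker history
        self.attn_other = nn.Linear(dim, 1)  # scores other-speaker history

    def forward(self, history, same_speaker):
        # history: (turns, dim); same_speaker: (turns,) boolean mask
        scores = torch.where(same_speaker.unsqueeze(-1),
                             self.attn_self(history),
                             self.attn_other(history))
        weights = torch.softmax(scores.squeeze(-1), dim=0)
        return (weights.unsqueeze(-1) * history).sum(dim=0)  # context vector
```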
no code implementations • 15 Aug 2019 • WonKee Lee, Junsu Park, Byung-Hyun Go, Jong-Hyeok Lee
Recent approaches to Automatic Post-Editing (APE) research have shown that better results are obtained by multi-source models, which jointly encode both the source (src) and the machine translation output (mt) to produce the post-edited sentence (pe).
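A minimal sketch of a multi-source model in this spirit: two Transformer encoders for src and mt, with the decoder cross-attending to the concatenation of both memories (positional encodings and masking omitted for brevity; hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

class MultiSourceAPE(nn.Module):
    """Minimal multi-source APE: encode src and mt separately, then let
    the decoder cross-attend to both memories at once."""
    def __init__(self, vocab, dim=512, heads=8, layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc_src = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.enc_mt = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True), layers)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, mt, pe_in):
        memory = torch.cat([self.enc_src(self.emb(src)),
                            self.enc_mt(self.emb(mt))], dim=1)
        return self.out(self.dec(self.emb(pe_in), memory))
```

Concatenating the two memories is only one fusion strategy; stacked or parallel cross-attention over the two encoders is another common choice.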
no code implementations • WS 2019 • WonKee Lee, Jaehun Shin, Jong-Hyeok Lee
This paper describes POSTECH's submission to the WMT 2019 shared task on Automatic Post-Editing (APE).
1 code implementation • NAACL 2019 • Jonggu Kim, Jong-Hyeok Lee
To capture salient contextual information for spoken language understanding (SLU) of a dialogue, we propose time-aware models that automatically learn the latent time-decay function of the history without a manual time-decay function.
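A sketch of the idea, assuming the latent decay is a small network over the temporal distance whose output is added to a content-based attention score (the exact parameterization in the paper may differ):

```python
import torch
import torch.nn as nn

class LearnedTimeDecayAttention(nn.Module):
    """Weight dialogue history by a *learned* function of temporal distance,
    rather than a hand-crafted decay such as exp(-d) (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.content = nn.Linear(dim, 1)           # content-based score
        self.decay = nn.Sequential(                # latent time-decay function
            nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, history, distances):
        # history: (turns, dim); distances: (turns,) float, turns back from now
        scores = self.content(history).squeeze(-1) \
               + self.decay(distances.unsqueeze(-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=0)
        return (weights.unsqueeze(-1) * history).sum(dim=0)
```

Because the decay is learned, the model is free to discover non-monotonic importance over time instead of being forced into a fixed exponential shape.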
no code implementations • WS 2018 • Jaehun Shin, Jong-Hyeok Lee
This paper describes POSTECH's submission to the WMT 2018 shared task on Automatic Post-Editing (APE).
no code implementations • 23 May 2018 • Jonggu Kim, Doyeon Kong, Jong-Hyeok Lee
Using a sequence-to-sequence framework, many neural conversation models for chit-chat succeed in producing natural responses.
no code implementations • 5 Jul 2017 • Jonggu Kim, Jong-Hyeok Lee
Most neural approaches to relation classification have focused on finding short patterns that represent the semantic relation using Convolutional Neural Networks (CNNs), and such approaches have generally outperformed those based on Recurrent Neural Networks (RNNs).
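A minimal CNN relation classifier illustrating why convolutions suit this task: fixed-width filters detect short n-gram patterns, and max-pooling keeps the strongest match regardless of position (dimensions are placeholders):

```python
import torch
import torch.nn as nn

class CNNRelationClassifier(nn.Module):
    """Convolutions over token embeddings capture short n-gram patterns
    that often signal a semantic relation; max-pooling keeps the strongest."""
    def __init__(self, vocab, n_relations, dim=100, filters=128, width=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, filters, kernel_size=width, padding=1)
        self.fc = nn.Linear(filters, n_relations)

    def forward(self, tokens):                     # tokens: (batch, seq)
        x = self.emb(tokens).transpose(1, 2)       # (batch, dim, seq)
        x = torch.relu(self.conv(x))               # (batch, filters, seq)
        x = x.max(dim=2).values                    # max-pool over positions
        return self.fc(x)                          # relation logits
```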
no code implementations • 8 Feb 2015 • Seung-Hoon Na, In-Su Kang, Jong-Hyeok Lee
Although these document characteristics should be handled differently, all previous methods of term frequency normalization have ignored these differences and have used a simplified length-driven approach that decreases the term frequency based only on document length, causing unreasonable penalization.
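For reference, the standard length-driven (pivoted) normalization that this criticism targets divides the raw term frequency by a factor depending only on document length:

$$\mathrm{tf}'(t,d) = \frac{\mathrm{tf}(t,d)}{(1-b) + b\,\frac{|d|}{\mathrm{avgdl}}}$$

where $\mathrm{tf}(t,d)$ is the raw count of term $t$ in document $d$, $|d|$ the document length, $\mathrm{avgdl}$ the average document length, and $b \in [0,1]$ the normalization slope. The denominator grows with $|d|$ alone, so verbose documents and genuinely multi-topic documents are penalized identically, which is the behavior the authors argue against.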