Search Results for author: Insoo Oh

Found 4 papers, 2 papers with code

Netmarble AI Center’s WMT21 Automatic Post-Editing Shared Task Submission

no code implementations · WMT (EMNLP) 2021 · Shinhyeok Oh, Sion Jang, Hu Xu, Shounan An, Insoo Oh

As experimental results show, our APE system significantly improves the provided MT outputs by -2.848 and +3.74 on the development dataset in terms of TER and BLEU, respectively.

Automatic Post-Editing · Multi-Task Learning · +1
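The reported gains are in TER (edit operations per reference word; lower is better) and BLEU (n-gram precision; higher is better), both conventionally reported as percentages. As a rough illustration of what a TER-style score measures, here is a minimal sketch that omits the block-shift operation of full TER; the function name and simplification are mine, not the paper's:

```python
def ter_like(hypothesis: str, reference: str) -> float:
    """Simplified TER: word-level edit distance / reference length.

    Real TER also counts block shifts as single edits; this sketch
    uses plain insert/delete/substitute edits only.
    """
    hyp, ref = hypothesis.split(), reference.split()
    # Single-row dynamic-programming edit distance over words.
    dp = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, dp[0] = dp[0], i
        for j, r in enumerate(ref, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete hypothesis word
                        dp[j - 1] + 1,      # insert reference word
                        prev + (h != r))    # substitute (or match)
            prev = cur
    return dp[-1] / len(ref)
```

Under this reading, a drop of 2.848 TER points means roughly 0.028 fewer edits per reference word after post-editing.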

RWEN-TTS: Relation-aware Word Encoding Network for Natural Text-to-Speech Synthesis

1 code implementation · 15 Dec 2022 · Shinhyeok Oh, HyeongRae Noh, Yoonseok Hong, Insoo Oh

With the advent of deep learning, a huge number of text-to-speech (TTS) models that produce human-like speech have emerged.

Relation · Speech Synthesis · +1

Netmarble AI Center's WMT21 Automatic Post-Editing Shared Task Submission

no code implementations · 14 Sep 2021 · Shinhyeok Oh, Sion Jang, Hu Xu, Shounan An, Insoo Oh

As experimental results show, our APE system significantly improves the provided MT outputs by -2.848 and +3.74 on the development dataset in terms of TER and BLEU, respectively.

Automatic Post-Editing · Multi-Task Learning · +1

Mel-spectrogram augmentation for sequence to sequence voice conversion

2 code implementations · 6 Jan 2020 · Yeongtae Hwang, Hyemin Cho, Hongsun Yang, Dong-Ok Won, Insoo Oh, Seong-Whan Lee

In addition, we proposed new policies (i.e., frequency warping, loudness, and time length control) for more data variations.

Voice Conversion
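The policies named in the abstract (frequency warping, loudness control, time length control) operate directly on the mel-spectrogram. A minimal pure-Python sketch of two of them, with a spectrogram represented as a list of frames (lists of floats); the function names and parameter ranges are illustrative assumptions, not the paper's implementation:

```python
import random

def loudness_control(mel, low=0.8, high=1.2):
    """Scale all mel energies by one random gain (louder/quieter speech)."""
    g = random.uniform(low, high)  # illustrative range, not from the paper
    return [[v * g for v in frame] for frame in mel]

def time_length_control(mel, rate):
    """Stretch (rate < 1) or compress (rate > 1) the time axis by
    linear interpolation between neighboring frames."""
    n_out = max(1, round(len(mel) / rate))
    out = []
    for i in range(n_out):
        pos = i * (len(mel) - 1) / max(1, n_out - 1)
        lo = int(pos)
        hi = min(lo + 1, len(mel) - 1)
        frac = pos - lo
        out.append([a * (1 - frac) + b * frac
                    for a, b in zip(mel[lo], mel[hi])])
    return out
```

Applying such transforms to training spectrograms yields extra data variations without recording new speech, which is the stated goal of the augmentation policies.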
