Search Results for author: Yunsu Kim

Found 31 papers, 13 papers with code

Leveraging the Interplay Between Syntactic and Acoustic Cues for Optimizing Korean TTS Pause Formation

no code implementations · 3 Apr 2024 · Yejin Jeon, Yunsu Kim, Gary Geunbae Lee

Contemporary neural speech synthesis models have demonstrated remarkable proficiency in speech generation, attaining a level of quality comparable to that of human-produced speech.

Speech Synthesis

Evalverse: Unified and Accessible Library for Large Language Model Evaluation

1 code implementation · 1 Apr 2024 · Jihoo Kim, Wonho Song, Dahyun Kim, Yunsu Kim, Yungi Kim, Chanjun Park

This paper introduces Evalverse, a novel library that streamlines the evaluation of Large Language Models (LLMs) by unifying disparate evaluation tools into a single, user-friendly framework.

Language Modelling · Large Language Model

Explainable Multi-hop Question Generation: An End-to-End Approach without Intermediate Question Labeling

1 code implementation · 31 Mar 2024 · Seonjeong Hwang, Yunsu Kim, Gary Geunbae Lee

We also show that our model logically and incrementally increases the complexity of questions, and that the generated multi-hop questions are beneficial for training question answering models.

Question Answering · Question Generation +2

sDPO: Don't Use Your Data All at Once

no code implementations · 28 Mar 2024 · Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, Chanjun Park

As the development of large language models (LLMs) progresses, aligning them with human preferences has become increasingly important.

Denoising Table-Text Retrieval for Open-Domain Question Answering

1 code implementation · 26 Mar 2024 · Deokhyung Kang, Baikjin Jung, Yunsu Kim, Gary Geunbae Lee

Previous studies in table-text open-domain question answering face two common challenges: first, their retrievers can be affected by false-positive labels in the training datasets; second, they may struggle to provide appropriate evidence for questions that require reasoning across the table.

Denoising · Open-Domain Question Answering +2

Autoregressive Score Generation for Multi-trait Essay Scoring

1 code implementation · 13 Mar 2024 · Heejin Do, Yunsu Kim, Gary Geunbae Lee

Recently, encoder-only pre-trained models such as BERT have been successfully applied in automated essay scoring (AES) to predict a single overall score.

Automated Essay Scoring

Optimizing Two-Pass Cross-Lingual Transfer Learning: Phoneme Recognition and Phoneme to Grapheme Translation

no code implementations · 6 Dec 2023 · Wonjun Lee, Gary Geunbae Lee, Yunsu Kim

This research contributes to the advancements of two-pass ASR systems in low-resource languages, offering the potential for improved cross-lingual transfer learning.

Cross-Lingual Transfer · Speech Recognition +2

Score-balanced Loss for Multi-aspect Pronunciation Assessment

1 code implementation · 26 May 2023 · Heejin Do, Yunsu Kim, Gary Geunbae Lee

With rapid technological growth, automatic pronunciation assessment has transitioned toward systems that evaluate pronunciation in various aspects, such as fluency and stress.

Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring

1 code implementation · 26 May 2023 · Heejin Do, Yunsu Kim, Gary Geunbae Lee

Thus, predicting various trait scores of unseen-prompt essays (called cross-prompt essay trait scoring) is a remaining challenge of AES.

Automated Essay Scoring · Relation

Hierarchical Pronunciation Assessment with Multi-Aspect Attention

1 code implementation · 15 Nov 2022 · Heejin Do, Yunsu Kim, Gary Geunbae Lee

In this paper, we propose a Hierarchical Pronunciation Assessment with Multi-aspect Attention (HiPAMA) model. It hierarchically represents the granularity levels to directly capture their linguistic structures, and it introduces multi-aspect attention that reflects associations across aspects at the same level to create more connotative representations.

Multi-Task Learning · Phone-level Pronunciation Scoring +2

Multi-Type Conversational Question-Answer Generation with Closed-ended and Unanswerable Questions

no code implementations · 24 Oct 2022 · Seonjeong Hwang, Yunsu Kim, Gary Geunbae Lee

Conversational question answering (CQA) facilitates an incremental and interactive understanding of a given context, but building a CQA system is difficult for many domains due to the problem of data scarcity.

Answer Generation · Conversational Question Answering +1

When and Why is Unsupervised Neural Machine Translation Useless?

no code implementations · EAMT 2020 · Yunsu Kim, Miguel Graça, Hermann Ney

This paper studies the practicality of the current state-of-the-art unsupervised methods in neural machine translation (NMT).

Machine Translation · NMT +2

When and Why is Document-level Context Useful in Neural Machine Translation?

1 code implementation · WS 2019 · Yunsu Kim, Duc Thanh Tran, Hermann Ney

Document-level context has received much attention as a way to compensate for the limitations of neural machine translation (NMT) of isolated sentences.

Machine Translation · NMT +1

Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages

no code implementations · IJCNLP 2019 · Yunsu Kim, Petre Petrov, Pavel Petrushkov, Shahram Khadivi, Hermann Ney

We present effective pre-training strategies for neural machine translation (NMT) using parallel corpora involving a pivot language, i.e., source-pivot and pivot-target, leading to a significant improvement in source-target translation.

Machine Translation · NMT +3
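The pivot-based transfer idea above can be sketched abstractly: pre-train on source-pivot and pivot-target parallel data, then assemble a source-target model from the resulting components before fine-tuning. This is a hedged, toy illustration, not the paper's implementation; `pretrain` and the string-valued "parameters" are hypothetical stand-ins for real NMT training.

```python
def pretrain(corpus_name: str) -> dict:
    """Stand-in for NMT pre-training on a parallel corpus.

    Returns toy 'parameters' (strings) for the encoder and decoder,
    tagged with the corpus they were trained on.
    """
    return {"encoder": f"enc[{corpus_name}]", "decoder": f"dec[{corpus_name}]"}

# Pre-train two models through the pivot language (English here).
src_pivot = pretrain("de-en")   # source-pivot corpus
pivot_tgt = pretrain("en-fr")   # pivot-target corpus

# Assemble the source-target model: take the encoder from the
# source-pivot model and the decoder from the pivot-target model,
# then fine-tune on whatever source-target data is available.
src_tgt_init = {"encoder": src_pivot["encoder"],
                "decoder": pivot_tgt["decoder"]}
print(src_tgt_init)
```

The point of the sketch is the initialization pattern: each side of the source-target model starts from parameters that have already seen the relevant language.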

The RWTH Aachen University Machine Translation Systems for WMT 2019

no code implementations · WS 2019 · Jan Rosendahl, Christian Herold, Yunsu Kim, Miguel Graça, Weiyue Wang, Parnia Bahar, Yingbo Gao, Hermann Ney

For the De-En task, none of the tested methods gave a significant improvement over last year's winning system, and we end up with the same performance, resulting in 39.6% BLEU on newstest2019.

Attribute · Language Modelling +3

Generalizing Back-Translation in Neural Machine Translation

no code implementations · WS 2019 · Miguel Graça, Yunsu Kim, Julian Schamper, Shahram Khadivi, Hermann Ney

Back-translation - data augmentation by translating target monolingual data - is a crucial component in modern neural machine translation (NMT).

Data Augmentation · Machine Translation +3
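Back-translation, as the abstract describes it, turns target-side monolingual data into synthetic parallel data by translating it back into the source language. A minimal sketch of that data flow, assuming a toy word-level dictionary in place of a trained target-to-source model (both `TOY_T2S` and `translate_t2s` are hypothetical stand-ins):

```python
# Toy target-to-source "model": a word-level dictionary for illustration only.
TOY_T2S = {"hallo": "hello", "welt": "world", "gut": "good"}

def translate_t2s(target_sentence: str) -> str:
    """Translate a target-language sentence back into the source language."""
    return " ".join(TOY_T2S.get(w, w) for w in target_sentence.split())

def back_translate(target_monolingual: list[str]) -> list[tuple[str, str]]:
    """Build synthetic (source, target) pairs from target-side monolingual data."""
    return [(translate_t2s(t), t) for t in target_monolingual]

synthetic = back_translate(["hallo welt", "welt gut"])
# The synthetic pairs are then mixed with genuine parallel data to train
# the forward source-to-target NMT model.
print(synthetic)
```

Note that the target side of each synthetic pair is genuine text; only the source side is machine-generated, which is why the forward model's output quality is preserved.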

Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies

1 code implementation · ACL 2019 · Yunsu Kim, Yingbo Gao, Hermann Ney

Transfer learning and multilingual modeling are essential for low-resource neural machine translation (NMT), but their applicability is limited to cognate languages that can share vocabularies.

Cross-Lingual Transfer · Low-Resource Neural Machine Translation +3

A Comparative Study on Vocabulary Reduction for Phrase Table Smoothing

no code implementations · WS 2016 · Yunsu Kim, Andreas Guta, Joern Wuebker, Hermann Ney

This work systematically analyzes the smoothing effect of vocabulary reduction for phrase translation models.


Improving Unsupervised Word-by-Word Translation with Language Model and Denoising Autoencoder

no code implementations · EMNLP 2018 · Yunsu Kim, Jiahui Geng, Hermann Ney

Unsupervised learning of cross-lingual word embeddings offers elegant matching of words across languages, but has fundamental limitations in translating sentences.

Denoising · Language Modelling +2
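The word matching the abstract refers to is typically done by nearest-neighbor search in a shared embedding space. A minimal sketch, assuming tiny hand-made two-dimensional embeddings (the vectors and vocabularies are illustrative, not trained):

```python
from math import sqrt

# Toy embeddings in a shared cross-lingual space (values are illustrative).
src_emb = {"hund": (1.0, 0.1), "katze": (0.1, 1.0)}   # German words
tgt_emb = {"dog": (0.9, 0.2), "cat": (0.2, 0.9)}      # English words

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def translate_word(word: str) -> str:
    """Translate a source word to its nearest target word by cosine similarity."""
    v = src_emb[word]
    return max(tgt_emb, key=lambda t: cos(v, tgt_emb[t]))

print(translate_word("hund"))  # nearest English neighbor of "hund"
```

This word-by-word matching is exactly what the paper identifies as insufficient for sentence translation, motivating its language-model and denoising-autoencoder extensions.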

Unsupervised Training for Large Vocabulary Translation Using Sparse Lexicon and Word Classes

no code implementations · EACL 2017 · Yunsu Kim, Julian Schamper, Hermann Ney

We address for the first time unsupervised training for a translation task with hundreds of thousands of vocabulary words.


The RWTH Aachen University English-German and German-English Unsupervised Neural Machine Translation Systems for WMT 2018

no code implementations · WS 2018 · Miguel Graça, Yunsu Kim, Julian Schamper, Jiahui Geng, Hermann Ney

This paper describes the unsupervised neural machine translation (NMT) systems of the RWTH Aachen University developed for the English ↔ German news translation task of the EMNLP 2018 Third Conference on Machine Translation (WMT 2018).

Machine Translation · NMT +2

The RWTH Aachen University Supervised Machine Translation Systems for WMT 2018

1 code implementation · WS 2018 · Julian Schamper, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, Hermann Ney

In total, we improve by 6.8% BLEU over last year's submission and by 4.8% BLEU over the winning system of the 2017 German→English task.

Machine Translation · Translation
