no code implementations • EMNLP (IWSLT) 2019 • Yuchen Yan, Dekai Wu, Serkan Kumyol
We introduce (1) a novel neural network structure for bilingual modeling of sentence pairs that allows efficient capture of bilingual relationships via biconstituent composition, (2) the concept of neural network biparsing, which applies not only to machine translation (MT) but also to a variety of other bilingual research areas, and (3) the concept of a biparsing-backpropagation training loop, which we hypothesize can efficiently learn complex biparse tree patterns.
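The idea of biconstituent composition can be pictured as building a single vector for each bilingual constituent and merging adjacent constituents bottom-up. The sketch below is an illustrative assumption, not the paper's actual architecture: each bispan is an embedding, and two children are composed into a parent with a learned affine map and tanh (the names `compose`, `W`, `b`, and the dimension are all invented for illustration).

```python
import numpy as np

# Hypothetical sketch of biconstituent composition: two adjacent
# biconstituent embeddings are merged into a parent embedding via a
# learned affine map followed by tanh. Shapes and initialization are
# assumptions made for this toy example.

rng = np.random.default_rng(0)
DIM = 8

# "Learned" parameters, randomly initialized here for illustration.
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1
b = np.zeros(DIM)

def compose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose two child biconstituent embeddings into a parent embedding."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Two leaf bispans (e.g., aligned word pairs) and their composed parent.
left_child = rng.standard_normal(DIM)
right_child = rng.standard_normal(DIM)
parent = compose(left_child, right_child)
print(parent.shape)  # (8,)
```

Applying `compose` recursively over a biparse tree would yield one vector per internal node, which is the kind of structure a biparsing-backpropagation loop could train end to end.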
no code implementations • WS 2016 • Meriem Beloucif, Markus Saers, Dekai Wu
In contrast, our proposed model not only improves translation by injecting a monolingual objective function to learn bilingual correlations during early training of the translation model, but also helps to learn more meaningful correlations from a relatively small data set, leading to better alignments than either conventional ITG or traditional GIZA++-based approaches.
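One way to read "injecting a monolingual objective during early training" is as a mixed loss whose monolingual term is weighted heavily at the start and annealed away. The function below is a minimal sketch under that assumption; the name `combined_loss`, the linear annealing schedule, and the numbers are all illustrative, not the paper's actual training procedure.

```python
# Hypothetical sketch: total loss mixes a bilingual (translation/alignment)
# loss with a monolingual loss, with the monolingual weight decaying
# linearly to zero over a warmup period. Schedule and names are assumptions.

def combined_loss(bilingual_loss, monolingual_loss, step, warmup_steps=1000):
    # Monolingual weight decays linearly from 1.0 to 0.0 over warmup.
    mono_weight = max(0.0, 1.0 - step / warmup_steps)
    return bilingual_loss + mono_weight * monolingual_loss

print(combined_loss(2.0, 1.5, step=0))     # 3.5 — monolingual term fully weighted
print(combined_loss(2.0, 1.5, step=1000))  # 2.0 — pure bilingual objective
```

The design point is simply that the monolingual signal guides the model while bilingual statistics are still unreliable, then fades out.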
no code implementations • LREC 2014 • Chi-kiu Lo, Dekai Wu
In this paper we focus on (1) the inter-annotator agreement (IAA) on the semantic role alignment task and (2) the overall IAA of HMEANT.
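IAA on a decision like semantic role alignment is typically quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below computes kappa for two annotators on a binary aligned/unaligned decision; the toy labels are invented for illustration and are not HMEANT annotation data.

```python
from collections import Counter

# Minimal Cohen's kappa for two annotators over the same items.
def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy alignment judgments from two annotators (illustrative data only).
ann1 = ["aligned", "aligned", "unaligned", "aligned", "unaligned", "aligned"]
ann2 = ["aligned", "unaligned", "unaligned", "aligned", "unaligned", "aligned"]
print(round(cohens_kappa(ann1, ann2), 3))  # 0.667
```

Observed agreement here is 5/6, chance agreement is 0.5, giving kappa = (5/6 − 1/2)/(1 − 1/2) = 2/3.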
no code implementations • LREC 2014 • Karteek Addanki, Dekai Wu
We investigate novel challenges involved in comparing model performance on the task of improvising responses to hip hop lyrics and discuss observations regarding inter-evaluator agreement on judging improvisation quality.