no code implementations • RANLP 2019 • Ahmet Üstün, Gosse Bouma, Gertjan van Noord
Cross-lingual word embedding models learn a shared vector space for two or more languages so that words with similar meaning are represented by similar vectors regardless of their language.
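The shared-space idea can be sketched with toy vectors (all vectors and word pairs below are hypothetical, for illustration only): in a well-aligned cross-lingual space, a translation pair such as English "dog" and Dutch "hond" should have high cosine similarity, while unrelated words should not.

```python
import numpy as np

# Hypothetical embeddings in one shared space (illustrative values, not from any trained model).
space = {
    "en:dog":  np.array([0.90, 0.10, 0.20]),
    "nl:hond": np.array([0.88, 0.12, 0.19]),
    "en:car":  np.array([0.10, 0.90, 0.30]),
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(space["en:dog"], space["nl:hond"]))  # high: translation pair
print(cosine(space["en:dog"], space["en:car"]))   # lower: unrelated words
```

This is only a sketch of the property such models optimize for, not of any particular training method.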
no code implementations • WS 2019 • Ahmet Üstün, Rob van der Goot, Gosse Bouma, Gertjan van Noord
This paper describes our submission to SIGMORPHON 2019 Task 2: Morphological analysis and lemmatization in context.
no code implementations • WS 2018 • Ahmet Üstün, Murathan Kurfalı, Burcu Can
The results show that morpheme-based models learn better word representations for morphologically complex languages than character-based and character n-gram models: morphemes incorporate more syntactic knowledge into learning, which makes morpheme-based models stronger on syntactic tasks.
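A minimal sketch of the morpheme-based composition idea (the segmentation, morpheme inventory, vectors, and sum-composition below are all assumptions for illustration; the paper's actual model may compose morphemes differently):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical morpheme inventory with random embeddings (illustration only).
morpheme_vecs = {m: rng.normal(size=4) for m in ["ev", "ler", "de", "kitap"]}

def word_vector(morphemes):
    # Compose a word representation from its morpheme embeddings;
    # summation is one simple composition choice.
    return np.sum([morpheme_vecs[m] for m in morphemes], axis=0)

# Turkish "evlerde" ("in the houses"), assumed segmentation ev+ler+de.
v = word_vector(["ev", "ler", "de"])
```

Because the suffixes "ler" (plural) and "de" (locative) carry grammatical information, representations built from them can encode syntax that a purely character-level model must rediscover on its own.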