no code implementations • LREC 2020 • Murathan Kurfalı, Robert Östling, Johan Sjons, Mats Wirén
We present a new set of 96 Swedish multi-word expressions annotated with degree of (non-)compositionality.
no code implementations • WS 2019 • Murathan Kurfalı, Robert Östling
Automatically classifying the relation between sentences in a discourse is a challenging task, in particular when there is no overt expression of the relation.
no code implementations • WS 2019 • Murathan Kurfalı, Robert Östling
We present a very simple method for parallel text cleaning of low-resource languages, based on projection of word embeddings trained on large monolingual corpora in high-resource languages.
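The abstract only names the idea, not the implementation; as an illustrative sketch under stated assumptions (a seed dictionary of translation pairs, an orthogonal Procrustes mapping between embedding spaces, and cosine similarity of averaged sentence vectors as the cleaning score — all choices are hypothetical, not necessarily the paper's), projection-based filtering of parallel sentence pairs might look like:

```python
# Hypothetical sketch: score parallel sentence pairs by projecting
# source-language word embeddings into the target embedding space and
# comparing averaged sentence vectors. Pairs scoring below a threshold
# would be discarded as noisy.
import numpy as np

def learn_projection(src, tgt):
    """Orthogonal Procrustes: the W minimizing ||src @ W - tgt||_F,
    given matched rows of seed-dictionary embeddings."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

def sentence_vec(tokens, emb, dim):
    """Average the embeddings of in-vocabulary tokens."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def pair_score(src_sent, tgt_sent, src_emb, tgt_emb, W, dim):
    """Similarity of a candidate parallel pair after projection."""
    a = sentence_vec(src_sent, src_emb, dim) @ W
    b = sentence_vec(tgt_sent, tgt_emb, dim)
    return cosine(a, b)
```

A corpus would then be cleaned by keeping only pairs with `pair_score` above some tuned threshold.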
no code implementations • WS 2017 • Robert {\"O}stling, Gintare Grigonyte
We present a very simple model for text quality assessment based on a deep convolutional neural network, where the only supervision required is one corpus of user-generated text of varying quality, and one contrasting text corpus of consistently high quality.
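The snippet names only the model family (a deep convolutional network trained to contrast user-generated and high-quality text). As a rough architectural sketch — a shallow character-level convolution with untrained random weights, purely hypothetical and not the authors' implementation — the forward pass of such a quality scorer could be:

```python
# Hypothetical sketch of a character-level convolutional quality scorer:
# character embeddings -> 1D convolution -> global max pooling -> sigmoid.
# Weights here are random; training would contrast a noisy corpus
# against a consistently high-quality one.
import numpy as np

rng = np.random.default_rng(0)

class TinyQualityCNN:
    def __init__(self, vocab_size=128, emb_dim=16, n_filters=8, width=3):
        self.emb = rng.normal(0, 0.1, (vocab_size, emb_dim))
        self.filters = rng.normal(0, 0.1, (n_filters, width, emb_dim))
        self.w = rng.normal(0, 0.1, n_filters)
        self.b = 0.0

    def forward(self, text):
        # Map characters to embedding rows (non-ASCII capped at index 127).
        x = self.emb[[min(ord(c), 127) for c in text]]     # (T, emb_dim)
        T = x.shape[0]
        _, width, _ = self.filters.shape
        # Slide each filter over the character sequence.
        conv = np.stack([
            np.array([np.sum(x[t:t + width] * f) for t in range(T - width + 1)])
            for f in self.filters
        ])                                                  # (n_filters, T-width+1)
        pooled = conv.max(axis=1)                           # global max pooling
        return 1 / (1 + np.exp(-(self.w @ pooled + self.b)))  # score in (0, 1)
```

The single sigmoid output is the kind of scalar quality score the described weak supervision setup would train.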
no code implementations • WS 2017 • Johannes Bjerva, Gintarė Grigonytė, Robert Östling, Barbara Plank
We present the RUG-SU team's submission at the Native Language Identification Shared Task 2017.
no code implementations • SEMEVAL 2017 • Johannes Bjerva, Robert Östling
Shared Task 1 at SemEval-2017 deals with assessing the semantic similarity between sentences, either in the same or in different languages.
no code implementations • EACL 2017 • Robert Östling, Jörg Tiedemann
Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other.
no code implementations • WS 2016 • Robert Östling
One of the purposes of the VarDial workshop series is to encourage research into NLP methods that treat human languages as a continuum, by designing models that exploit the similarities between languages and variants.
no code implementations • COLING 2016 • Robert Östling
Current methods for word alignment require considerable amounts of parallel text to deliver accurate results, a requirement which is met only for a small minority of the world's approximately 7,000 languages.