18 papers with code • 0 benchmarks • 1 dataset
Morphological tagging is the task of assigning labels to a sequence of tokens that describe them morphologically. Compared to part-of-speech tagging, morphological tagging additionally considers morphological features such as case, gender, or verb tense.
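As an illustration of the task definition above: a morphological tag can be viewed as a bundle of feature=value pairs layered on top of the POS tag. The following minimal Python sketch uses made-up example tags in the style of Universal Dependencies annotation (the sentence and tags are hypothetical, not taken from any paper listed here):

```python
# Hypothetical UD-style annotation for the German sentence "Die Katze schläft."
# Each token carries a coarse POS tag plus a composite morphological tag.
tagged = [
    ("Die",     "DET",  "Case=Nom|Definite=Def|Gender=Fem|Number=Sing"),
    ("Katze",   "NOUN", "Case=Nom|Gender=Fem|Number=Sing"),
    ("schläft", "VERB", "Mood=Ind|Number=Sing|Person=3|Tense=Pres"),
]

def parse_morph_tag(tag):
    """Split a composite morphological tag into its feature=value pairs."""
    return dict(pair.split("=", 1) for pair in tag.split("|"))

for token, pos, morph in tagged:
    # A POS tagger would stop at `pos`; a morphological tagger must also
    # predict the full feature bundle in `morph`.
    print(token, pos, parse_morph_tag(morph))
```

Whether such a composite tag is predicted as one monolithic label or decomposed into its individual features is a design choice that several of the papers below address.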
These leaderboards are used to track progress in Morphological Tagging.
In this paper, we investigate models that use recurrent neural networks with sentence-level context for initial character and word-based representations.
We introduce a constituency parser based on a bi-LSTM encoder adapted from recent work (Cross and Huang, 2016b; Kiperwasser and Goldberg, 2016), which can incorporate a lower-level character bi-LSTM (Ballesteros et al., 2015; Plank et al., 2016).
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture.
We present a general-purpose tagger based on convolutional neural networks (CNN), used for both composing word vectors and encoding context information.
Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?
In this paper, we investigate which character-level patterns neural networks learn and whether those patterns coincide with manually defined word segmentations and annotations.
Free as in Free Word Order: An Energy Based Model for Word Segmentation and Morphological Tagging in Sanskrit
The configurational information in sentences of a free word order language such as Sanskrit is of limited use.
Neural morphological tagging has been regarded as an extension of the POS tagging task, treating each morphological tag as a monolithic label and ignoring its internal structure.
Multi Task Deep Morphological Analyzer: Context Aware Joint Morphological Tagging and Lemma Prediction
The ambiguities introduced by the recombination of morphemes, which yields several possible inflections for a word, make the prediction of syntactic traits in Morphologically Rich Languages (MRLs) a notoriously complicated task.
In this work, we intrinsically and extrinsically evaluate and compare existing word embedding models for the Armenian language.