Morphological Tagging
26 papers with code • 0 benchmarks • 4 datasets
Morphological tagging is the task of assigning each token in a sequence a label that describes its morphology. Compared to part-of-speech (POS) tagging, morphological tagging additionally captures fine-grained morphological features such as case, gender, or verb tense.
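As a concrete illustration, here is a minimal sketch using spaCy (an assumed choice of tagger; the model name en_core_web_sm and the example sentence are ours), which prints each token's coarse POS tag alongside its morphological feature bundle:

```python
# Minimal sketch with spaCy; assumes the small English model has been
# installed via `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("She sang the songs.")

for token in doc:
    # token.pos_ is the coarse part-of-speech tag;
    # token.morph bundles morphological features (e.g. Tense=Past, Number=Plur).
    print(f"{token.text}\t{token.pos_}\t{token.morph}")
```

For a past-tense verb like "sang", this yields the POS tag VERB together with features such as Tense=Past|VerbForm=Fin, which is exactly the extra information a plain POS tagger does not provide.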
Most implemented papers
Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings
In this paper, we investigate models that use recurrent neural networks with sentence-level context for initial character and word-based representations.
AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With
In addition, there are no accepted tasks or benchmarks for evaluating the progress of Hebrew PLMs.
Sentence Embedding Models for Ancient Greek Using Multilingual Knowledge Distillation
In this work, we use a multilingual knowledge distillation approach to train BERT models to produce sentence embeddings for Ancient Greek text.
Neural Morphological Tagging from Characters for Morphologically Rich Languages
We systematically explore a variety of neural architectures (DNN, CNN, CNNHighway, LSTM, BLSTM) to obtain character-based word vectors combined with bidirectional LSTMs to model across-word context in an end-to-end setting.
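To make the character-to-word pipeline concrete, here is a minimal PyTorch sketch (our own illustration, not the paper's implementation; all layer sizes and names are invented) that composes word vectors with a character BiLSTM and models across-word context with a word-level BiLSTM:

```python
# Sketch of a character-based morphological tagger (illustrative only).
import torch
import torch.nn as nn

class CharWordTagger(nn.Module):
    def __init__(self, n_chars, n_tags, char_dim=32, char_hidden=64, word_hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # A character BiLSTM composes a word vector from the word's characters.
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)
        # A word-level BiLSTM models context across the words of the sentence.
        self.word_lstm = nn.LSTM(2 * char_hidden, word_hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * word_hidden, n_tags)

    def forward(self, char_ids):
        # char_ids: (n_words, max_word_len) for a single sentence.
        e = self.char_emb(char_ids)                  # (n_words, len, char_dim)
        _, (h, _) = self.char_lstm(e)                # h: (2, n_words, char_hidden)
        word_vecs = torch.cat([h[0], h[1]], dim=-1)  # (n_words, 2*char_hidden)
        ctx, _ = self.word_lstm(word_vecs.unsqueeze(0))  # (1, n_words, 2*word_hidden)
        return self.out(ctx.squeeze(0))              # per-word tag scores

# Usage sketch: 4 words, up to 6 characters each, 50 possible morphological tags.
model = CharWordTagger(n_chars=100, n_tags=50)
scores = model(torch.randint(1, 100, (4, 6)))
print(scores.shape)  # torch.Size([4, 50])
```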
Multilingual Lexicalized Constituency Parsing with Word-Level Auxiliary Tasks
We introduce a constituency parser based on a bi-LSTM encoder adapted from recent work (Cross and Huang, 2016b; Kiperwasser and Goldberg, 2016), which can incorporate a lower-level character bi-LSTM (Ballesteros et al., 2015; Plank et al., 2016).
What do Neural Machine Translation Models Learn about Morphology?
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture.
A General-Purpose Tagger with Convolutional Neural Networks
We present a general-purpose tagger based on convolutional neural networks (CNN), used for both composing word vectors and encoding context information.
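A minimal sketch of that two-level convolutional idea (our illustration, with invented hyperparameters, not the paper's exact architecture): a character convolution with max-pooling composes each word vector, and a word-level convolution encodes context from neighbouring words:

```python
# Sketch of a CNN-based tagger in PyTorch (illustrative only).
import torch
import torch.nn as nn

class CNNTagger(nn.Module):
    def __init__(self, n_chars, n_tags, char_dim=32, word_dim=64, ctx_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Character convolution + max-pooling composes a fixed-size word vector.
        self.char_conv = nn.Conv1d(char_dim, word_dim, kernel_size=3, padding=1)
        # Word-level convolution encodes context from neighbouring words.
        self.ctx_conv = nn.Conv1d(word_dim, ctx_dim, kernel_size=5, padding=2)
        self.out = nn.Linear(ctx_dim, n_tags)

    def forward(self, char_ids):
        # char_ids: (n_words, max_word_len) for one sentence.
        e = self.char_emb(char_ids).transpose(1, 2)          # (n_words, char_dim, len)
        w = torch.relu(self.char_conv(e)).max(dim=2).values  # (n_words, word_dim)
        s = w.t().unsqueeze(0)                               # (1, word_dim, n_words)
        c = torch.relu(self.ctx_conv(s)).squeeze(0).t()      # (n_words, ctx_dim)
        return self.out(c)                                   # per-word tag scores

model = CNNTagger(n_chars=100, n_tags=50)
print(model(torch.randint(1, 100, (4, 6))).shape)  # torch.Size([4, 50])
```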
Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?
In this paper, we investigate which character-level patterns neural networks learn and if those patterns coincide with manually-defined word segmentations and annotations.
Free as in Free Word Order: An Energy Based Model for Word Segmentation and Morphological Tagging in Sanskrit
The configurational information in sentences of a free word order language such as Sanskrit is of limited use.
Tree-Stack LSTM in Transition Based Dependency Parsing
We introduce the tree-stack LSTM to model the state of a transition-based parser with recurrent neural networks.