Morphological Tagging

23 papers with code • 0 benchmarks • 4 datasets

Morphological tagging is the task of assigning each token in a sequence a label that describes it morphologically. Compared to part-of-speech tagging, morphological tagging also predicts morphological features such as case, gender, number, or verb tense.
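The contrast can be sketched with plain Python (no tagger involved): a minimal example, assuming CoNLL-U-style `Feature=Value` strings joined by `|` as the morphological label format, with hand-written tags for illustration.

```python
def morph_tag(features: dict) -> str:
    """Serialize a feature dict into a CoNLL-U-style FEATS string,
    e.g. {"Case": "Nom", "Number": "Sing"} -> "Case=Nom|Number=Sing"."""
    return "|".join(f"{k}={v}" for k, v in sorted(features.items()))

# POS tagging assigns one coarse category per token...
sentence = ["She", "reads", "books"]
pos_tags = ["PRON", "VERB", "NOUN"]

# ...while morphological tagging additionally predicts features such as
# case, gender, number, person, and tense (values here are illustrative).
morph_tags = [
    morph_tag({"Case": "Nom", "Gender": "Fem", "Number": "Sing", "Person": "3"}),
    morph_tag({"Mood": "Ind", "Number": "Sing", "Person": "3", "Tense": "Pres"}),
    morph_tag({"Number": "Plur"}),
]

for tok, pos, morph in zip(sentence, pos_tags, morph_tags):
    print(f"{tok}\t{pos}\t{morph}")
```

Note how the morphological label space is combinatorial (one tag per feature bundle), which is what makes the task harder than POS tagging for morphologically rich languages.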

Most implemented papers

Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings

google/meta_tagger ACL 2018

In this paper, we investigate models that use recurrent neural networks with sentence-level context for initial character and word-based representations.

AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With

OnlpLab/Hebrew-Sentiment-Data 8 Apr 2021

Second, there are no accepted tasks and benchmarks on which to evaluate the progress of Hebrew PLMs.

Sentence Embedding Models for Ancient Greek Using Multilingual Knowledge Distillation

TickleForce/ancient-greek-datasets 24 Aug 2023

In this work, we use a multilingual knowledge distillation approach to train BERT models to produce sentence embeddings for Ancient Greek text.

Neural Morphological Tagging from Characters for Morphologically Rich Languages

ziegler-ingo/cleavage_prediction 21 Jun 2016

We systematically explore a variety of neural architectures (DNN, CNN, CNNHighway, LSTM, BLSTM) to obtain character-based word vectors combined with bidirectional LSTMs to model across-word context in an end-to-end setting.

Multilingual Lexicalized Constituency Parsing with Word-Level Auxiliary Tasks

mcoavoux/mtg EACL 2017

We introduce a constituency parser based on a bi-LSTM encoder adapted from recent work (Cross and Huang, 2016b; Kiperwasser and Goldberg, 2016), which can incorporate a lower level character biLSTM (Ballesteros et al., 2015; Plank et al., 2016).

What do Neural Machine Translation Models Learn about Morphology?

boknilev/nmt-repr-analysis ACL 2017

Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture.

A General-Purpose Tagger with Convolutional Neural Networks

EggplantElf/sclem2017-tagger WS 2017

We present a general-purpose tagger based on convolutional neural networks (CNN), used for both composing word vectors and encoding context information.

Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?

FredericGodin/ContextualDecomposition-NLP EMNLP 2018

In this paper, we investigate which character-level patterns neural networks learn and if those patterns coincide with manually-defined word segmentations and annotations.

Tree-Stack LSTM in Transition Based Dependency Parsing

kirnap/ku-dependency-parser2 CoNLL 2018

We introduce tree-stack LSTM to model state of a transition based parser with recurrent neural networks.