Part-Of-Speech Tagging

213 papers with code • 17 benchmarks • 26 datasets

Part-of-speech tagging (POS tagging) is the task of labeling each word in a text with its part of speech. A part of speech is a category of words with similar grammatical properties. Common English parts of speech are noun, verb, adjective, adverb, pronoun, preposition, and conjunction.

Example:

Vinken   ,   61   years   old
NNP      ,   CD   NNS     JJ
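
For a quick sense of what a tagger produces, the sketch below reproduces the example above with NLTK's off-the-shelf tagger. This is a generic illustration, not tied to any paper on this page; it assumes the standard NLTK tokenizer and perceptron-tagger resources are downloaded, and the exact tags can vary with the model version.

```python
# Minimal POS-tagging sketch with NLTK (assumes the standard tokenizer and
# averaged-perceptron tagger resources have been downloaded via nltk.download).
import nltk

tokens = nltk.word_tokenize("Vinken , 61 years old")
print(nltk.pos_tag(tokens))
# Expected Penn Treebank tags, matching the example above:
# [('Vinken', 'NNP'), (',', ','), ('61', 'CD'), ('years', 'NNS'), ('old', 'JJ')]
```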

Libraries

Use these libraries to find Part-Of-Speech Tagging models and implementations

Most implemented papers

Towards Deep Learning Models Resistant to Adversarial Attacks

MadryLab/mnist_challenge ICLR 2018

Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.

End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF

guillaumegenthial/sequence_tagging ACL 2016

State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.

Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

DongjunLee/dmn-tensorflow 24 Jun 2015

Most tasks in natural language processing can be cast into question answering (QA) problems over language input.

ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations

sinovation/ZEN Findings of the Association for Computational Linguistics 2020

Moreover, it is shown that reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data.

CamemBERT: a Tasty French Language Model

huggingface/transformers ACL 2020

We show that the use of web crawled data is preferable to the use of Wikipedia data.
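Since the listed implementation is huggingface/transformers, a French POS tagger can be built by putting a token-classification head on the pretrained CamemBERT encoder. The following is a hedged sketch only (not the paper's training code): the fine-tuning loop and label mapping are omitted, and the 17-label head assumes the Universal Dependencies UPOS tag set.

```python
# Sketch: CamemBERT with a token-classification head via huggingface/transformers.
# The classification head starts randomly initialized and would need fine-tuning
# on POS-annotated French data before its predictions are meaningful.
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForTokenClassification.from_pretrained(
    "camembert-base",
    num_labels=17,  # assumption: the 17 Universal Dependencies UPOS tags
)

inputs = tokenizer("Le chat dort sur le canapé .", return_tensors="pt")
logits = model(**inputs).logits  # one score per sub-token per tag
```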

Does Manipulating Tokenization Aid Cross-Lingual Transfer? A Study on POS Tagging for Non-Standardized Languages

mainlp/noisydialect 20 Apr 2023

This can for instance be observed when finetuning PLMs on one language and evaluating them on data in a closely related language variety with no standardized orthography.

Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network

aneesh-joshi/LSTM_POS_Tagger 21 Oct 2015

Bidirectional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNNs) have been shown to be very effective for tagging sequential data, e.g. speech utterances or handwritten documents.
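As a rough illustration of this architecture family, here is a minimal BiLSTM tagger sketch in PyTorch. The hyperparameters and embedding setup are placeholders, not the configuration used in the paper or the linked repository.

```python
# Generic BiLSTM tagger: embed tokens, run a bidirectional LSTM, and project
# each position's hidden state to per-tag scores.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)                     # (batch, seq_len, tagset_size)

# Dummy usage; training would minimize per-token cross-entropy against gold tags.
tagger = BiLSTMTagger(vocab_size=10_000, tagset_size=45)  # 45-tag Penn Treebank set
scores = tagger(torch.randint(0, 10_000, (1, 5)))
```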

Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks

kimiyoung/transfer 18 Mar 2017

Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks.

Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss

bplank/bilstm-aux ACL 2016

Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance on input representations, target languages, data set size, and label noise.

Semi-supervised Multitask Learning for Sequence Labeling

marekrei/sequence-labeler ACL 2017

We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.
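The secondary objective described here can be sketched as an auxiliary language-modeling loss added to the usual tagging loss. The function below is illustrative only; the names and the weighting factor are assumptions, not taken from the marekrei/sequence-labeler codebase.

```python
# Sketch of a multitask loss: the tagging objective plus auxiliary objectives
# that predict each position's neighbouring words.
import torch.nn as nn

def multitask_loss(tag_logits, lm_logits_fwd, lm_logits_bwd,
                   gold_tags, token_ids, gamma=0.1):
    ce = nn.CrossEntropyLoss()
    # Primary objective: per-token tag prediction.
    tagging = ce(tag_logits.flatten(0, 1), gold_tags.flatten())
    # Auxiliary objective: predict the next word from each position ...
    next_word = ce(lm_logits_fwd[:, :-1].flatten(0, 1),
                   token_ids[:, 1:].flatten())
    # ... and the previous word from each position.
    prev_word = ce(lm_logits_bwd[:, 1:].flatten(0, 1),
                   token_ids[:, :-1].flatten())
    # gamma is an illustrative weighting between the two objectives.
    return tagging + gamma * (next_word + prev_word)
```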