POS (Part-of-Speech Tagging)
269 papers with code • 4 benchmarks • 4 datasets
Libraries
Use these libraries to find POS models and implementations.

Most implemented papers
Bidirectional LSTM-CRF Models for Sequence Tagging
The model can also use sentence-level tag information thanks to a CRF layer.
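The excerpt above describes a BiLSTM encoder whose per-token scores are combined with a CRF layer that captures sentence-level tag dependencies. As a rough orientation, the PyTorch sketch below implements only the BiLSTM emission part with greedy decoding; the class name, dimensions, and toy usage are illustrative assumptions rather than the authors' code, and a real BI-LSTM-CRF would replace the argmax with CRF training and Viterbi decoding.

```python
# Minimal BiLSTM tagger sketch (emissions only); the paper adds a CRF layer
# on top of these per-token scores to model sentence-level tag dependencies.
# All names and sizes here are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(hidden_dim, num_tags)  # per-token tag scores

    def forward(self, token_ids):          # (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.proj(states)           # (batch, seq_len, num_tags)

# Toy usage: score a batch of two 5-token sentences over 17 POS tags.
model = BiLSTMTagger(vocab_size=10_000, num_tags=17)
emissions = model(torch.randint(0, 10_000, (2, 5)))
predicted_tags = emissions.argmax(dim=-1)   # greedy decoding; a CRF would
                                            # replace this with Viterbi search
```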
End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF
State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.
Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks
Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance.
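The paper's point is that tuning choices such as the optimizer, dropout, and pretrained embeddings can move an LSTM tagger from mediocre to state-of-the-art results. The sketch below is a generic grid-search scaffold for such a sweep; the parameter names, candidate values, and the train_and_evaluate stub are hypothetical placeholders, not the grid studied in the paper.

```python
# Generic hyperparameter sweep scaffold for an LSTM tagger. The parameter
# names and candidate values are illustrative assumptions, and
# `train_and_evaluate` is a hypothetical stand-in for a real training run
# that returns a development-set score.
from itertools import product
import random

search_space = {
    "optimizer": ["adam", "sgd", "nadam"],
    "dropout": [0.25, 0.5],
    "lstm_units": [100, 200],
    "word_embeddings": ["glove", "fasttext"],
}

def train_and_evaluate(config):
    # Placeholder: replace with an actual training loop returning a dev score.
    return random.random()

best_score, best_config = float("-inf"), None
for values in product(*search_space.values()):
    config = dict(zip(search_space.keys(), values))
    score = train_and_evaluate(config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```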
Joint entity recognition and relation extraction as a multi-head selection problem
State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers.
Does Manipulating Tokenization Aid Cross-Lingual Transfer? A Study on POS Tagging for Non-Standardized Languages
This can, for instance, be observed when fine-tuning PLMs on one language and evaluating them on data in a closely related language variety with no standardized orthography.
Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing
Our model not only makes effective use of entities and their latent types as features, but is also more interpretable, as shown by visualizing the attention mechanisms applied to the model and the results of latent entity typing (LET).
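Since the interpretability claim rests on inspecting attention weights, the sketch below shows a minimal attention-pooling layer over encoder states whose per-token weights can be printed or plotted. It is a generic illustration under assumed toy inputs, not the paper's entity-aware attention or latent entity typing modules.

```python
# Minimal attention pooling over encoder states, with the learned weights
# exposed per token so they can be inspected or visualized. This is a
# generic interpretability illustration, not the paper's entity-aware
# attention or LET components.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, states):                    # (batch, seq_len, hidden)
        weights = torch.softmax(self.score(states).squeeze(-1), dim=-1)
        pooled = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        return pooled, weights                    # weights are inspectable

tokens = ["the", "company", "acquired", "the", "startup"]
states = torch.randn(1, len(tokens), 8)           # stand-in for BiLSTM output
_, weights = AttentionPooling(8)(states)
for tok, w in zip(tokens, weights[0].tolist()):
    print(f"{tok:10s} {w:.3f}")                   # per-token attention weight
```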
Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network
The Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN) has been shown to be very effective for tagging sequential data, e.g., speech utterances or handwritten documents.
Automated Phrase Mining from Massive Text Corpora
As one of the fundamental tasks in text analysis, phrase mining aims at extracting quality phrases from a text corpus.
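To make the notion of mining phrase candidates from a corpus concrete, the snippet below counts frequent n-grams as crude candidates. It is only a frequency baseline over assumed toy data, not AutoPhrase itself, which additionally performs phrase quality estimation and POS-guided segmentation.

```python
# Crude phrase-candidate baseline: count frequent n-grams in a corpus.
# This only illustrates frequency-based candidate mining; the actual
# AutoPhrase method adds quality estimation and POS-guided segmentation
# on top of candidates like these.
from collections import Counter

corpus = [
    "deep learning for natural language processing",
    "natural language processing with deep learning",
    "sequence labeling with recurrent neural networks",
]

def ngram_candidates(sentences, n_range=(2, 3), min_count=2):
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for n in range(n_range[0], n_range[1] + 1):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
    return [(p, c) for p, c in counts.most_common() if c >= min_count]

print(ngram_candidates(corpus))
# e.g. [('deep learning', 2), ('natural language', 2),
#       ('language processing', 2), ('natural language processing', 2)]
```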
Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks
Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks.
PKUSEG: A Toolkit for Multi-Domain Chinese Word Segmentation
Through this method, we generate synthetic data using a large amount of unlabeled data in the target domain and then obtain a word segmentation model for the target domain.
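The excerpt describes a self-training style of domain adaptation: label unlabeled target-domain text with an existing model, keep the resulting synthetic data, and train a target-domain model on it. The sketch below illustrates that generic loop on a toy classification task; it is not the PKUSEG pipeline, and the model choice, confidence threshold, and data are assumptions.

```python
# Generic self-training sketch: label unlabeled target-domain examples with
# a source-trained model, keep confident predictions as synthetic data, and
# retrain. This illustrates the idea in the excerpt; it is NOT the PKUSEG
# pipeline, and the toy task, threshold, and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(0, 1, (200, 5))
y_source = (X_source[:, 0] > 0).astype(int)        # labeled source domain
X_target = rng.normal(0.5, 1, (300, 5))            # unlabeled target domain

model = LogisticRegression().fit(X_source, y_source)

probs = model.predict_proba(X_target)
confident = probs.max(axis=1) > 0.9                # keep confident pseudo-labels
X_synth, y_synth = X_target[confident], probs[confident].argmax(axis=1)

# Retrain on the source data plus the synthetic target-domain data.
model = LogisticRegression().fit(
    np.vstack([X_source, X_synth]),
    np.concatenate([y_source, y_synth]),
)
```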