POS
253 papers with code • 4 benchmarks • 4 datasets
Libraries
Use these libraries to find POS models and implementations

Latest papers
BanglaAutoKG: Automatic Bangla Knowledge Graph Construction with Semantic Neural Graph Filtering
Knowledge Graphs (KGs) have proven essential in information processing and reasoning applications because they link related entities and provide context-rich information, supporting efficient information retrieval and knowledge discovery while presenting information flow effectively.
Analyzing Reward Dynamics and Decentralization in Ethereum 2.0: An Advanced Data Engineering Workflow and Comprehensive Datasets for Proof-of-Stake Incentives
Ethereum 2.0, as the preeminent smart contract blockchain platform, guarantees the precise execution of applications without third-party intervention.
OrderBkd: Textual backdoor attack through repositioning
The use of third-party datasets and pre-trained machine learning models poses a threat to NLP systems due to the possibility of hidden backdoor attacks.
ToPro: Token-Level Prompt Decomposition for Cross-Lingual Sequence Labeling Tasks
However, most previous studies primarily focused on sentence-level classification tasks, and only a few considered token-level labeling tasks such as Named Entity Recognition (NER) and Part-of-Speech (POS) tagging.
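ToPro's central idea is to decompose a sentence into one prompt per token, so that a sequence labeling task becomes a series of single-label predictions. A minimal sketch of that decomposition step; the template and function name are illustrative, not taken from the paper:

```python
# Sketch of token-level prompt decomposition: each token gets its own
# cloze-style prompt, so a sequence labeling task (e.g., POS tagging)
# becomes one classification query per token.

def decompose_into_token_prompts(tokens, template="{sentence} The word '{token}' is a [MASK]."):
    """Build one prompt per token; the template is a hypothetical example."""
    sentence = " ".join(tokens)
    return [template.format(sentence=sentence, token=tok) for tok in tokens]

for prompt in decompose_into_token_prompts(["She", "sells", "seashells"]):
    print(prompt)
```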
Multi-Task Learning for Front-End Text Processing in TTS
We propose a multi-task learning (MTL) model for jointly performing three tasks that are commonly solved in a text-to-speech (TTS) front-end: text normalization (TN), part-of-speech (POS) tagging, and homograph disambiguation (HD).
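A shared encoder with one output head per task is the standard way to realize such an MTL front-end. The sketch below assumes a BiLSTM encoder and illustrative layer sizes; it is not the paper's reported architecture:

```python
import torch
import torch.nn as nn

class FrontEndMTL(nn.Module):
    """Sketch of a shared-encoder multi-task TTS front-end model.
    The BiLSTM encoder and all sizes are assumptions for illustration."""

    def __init__(self, vocab_size, n_tn, n_pos, n_hd, d=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.tn_head = nn.Linear(2 * d, n_tn)    # text normalization tags
        self.pos_head = nn.Linear(2 * d, n_pos)  # part-of-speech tags
        self.hd_head = nn.Linear(2 * d, n_hd)    # homograph pronunciation classes

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.tn_head(h), self.pos_head(h), self.hd_head(h)

model = FrontEndMTL(vocab_size=10000, n_tn=8, n_pos=17, n_hd=50)
logits_tn, logits_pos, logits_hd = model(torch.randint(0, 10000, (2, 12)))
```

Each head is trained with its own loss on the shared token representations, which is what lets the three tasks inform one another.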
Def2Vec: Extensible Word Embeddings from Dictionary Definitions
Def2Vec introduces a novel paradigm for word embeddings, leveraging dictionary definitions to learn semantic representations.
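One simple way to illustrate the definition-based paradigm is to embed an out-of-vocabulary word by pooling the vectors of the words in its dictionary definition. This toy sketch conveys the idea only; it is not the paper's actual pipeline:

```python
import numpy as np

# Toy base vectors for words appearing in definitions (assumed pretrained).
base = {"small": np.array([1.0, 0.0]),
        "domesticated": np.array([0.0, 1.0]),
        "feline": np.array([1.0, 1.0])}

def def2vec(definition_tokens, base_vectors):
    """Embed a word as the mean of its definition tokens' vectors;
    a minimal stand-in for definition-based embedding learning."""
    vecs = [base_vectors[t] for t in definition_tokens if t in base_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

print(def2vec(["small", "domesticated", "feline"], base))  # [0.667 0.667]
```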
calamanCy: A Tagalog Natural Language Processing Toolkit
We introduce calamanCy, an open-source toolkit for constructing natural language processing (NLP) pipelines for Tagalog.
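calamanCy builds on spaCy, so a pipeline is loaded and applied like any spaCy model. A minimal usage sketch; the exact model name and version string are assumptions and may have changed since release:

```python
import calamancy

# Load a pretrained Tagalog pipeline (model name assumed from the
# project's documentation; check the repo for current releases).
nlp = calamancy.load("tl_calamancy_md-0.1.0")

doc = nlp("Ako si Juan de la Cruz.")
for token in doc:
    print(token.text, token.pos_)  # standard spaCy token attributes
```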
Probing LLMs for Joint Encoding of Linguistic Categories
Large Language Models (LLMs) exhibit impressive performance on a range of NLP tasks, due to the general-purpose linguistic knowledge acquired during pretraining.
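A common way to test which linguistic categories a model encodes is a linear probe: freeze the model, extract hidden states, and fit a simple classifier on them. The sketch below uses bert-base-cased and a toy two-sentence POS dataset purely for illustration; the paper's models and probing setup may differ:

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-cased")
enc = AutoModel.from_pretrained("bert-base-cased")

sentences = [("She runs fast".split(), ["PRON", "VERB", "ADV"]),
             ("Dogs bark loudly".split(), ["NOUN", "VERB", "ADV"])]

X, y = [], []
for words, tags in sentences:
    batch = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state[0]
    for i, tag in enumerate(tags):
        # take the first subword of each word as its representation
        sub = batch.word_ids(0).index(i)
        X.append(hidden[sub].numpy())
        y.append(tag)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.score(X, y))  # high accuracy suggests POS is linearly decodable
```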
ZGUL: Zero-shot Generalization to Unseen Languages using Multi-source Ensembling of Language Adapters
We posit that for more effective cross-lingual transfer, instead of just one source language adapter (LA), we need to leverage the LAs of multiple (linguistically or geographically related) source languages, both at train and test time; we investigate this via our novel neural architecture, ZGUL.
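The ensembling idea can be sketched as a weighted combination of the outputs of several source-language bottleneck adapters. ZGUL learns its fusion weights; in this simplified sketch they are supplied by hand:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, d=768, bottleneck=64):
        super().__init__()
        self.down, self.up = nn.Linear(d, bottleneck), nn.Linear(bottleneck, d)
    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

def ensemble_adapters(h, adapters, weights):
    """Weighted combination of several source-language adapters' outputs;
    the hand-set weights stand in for ZGUL's learned fusion."""
    outs = torch.stack([a(h) for a in adapters])        # (n_adapters, ..., d)
    w = torch.softmax(torch.as_tensor(weights), dim=0)
    return (w.view(-1, *[1] * (outs.dim() - 1)) * outs).sum(0)

hidden = torch.randn(2, 5, 768)            # (batch, seq, dim)
sources = [Adapter() for _ in range(3)]    # e.g., three related languages
fused = ensemble_adapters(hidden, sources, [0.5, 0.3, 0.2])
print(fused.shape)  # torch.Size([2, 5, 768])
```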
Improving Cross-Lingual Transfer through Subtree-Aware Word Reordering
Despite the impressive growth of the abilities of multilingual language models, such as XLM-R and mT5, it has been shown that they still face difficulties when tackling typologically distant languages, particularly in the low-resource setting.