Lexical Normalization
15 papers with code • 1 benchmark • 1 dataset
Lexical normalization is the task of transforming non-standard text into a standard register.
Example:
new pix comming tomoroe
new pictures coming tomorrow
Datasets usually consist of tweets, since these naturally contain a fair amount of non-standard language.
For lexical normalization, only word-level replacements are annotated. Some corpora also include annotation for 1-N and N-1 replacements. Word insertion, deletion, and reordering, however, are not part of the task.
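The word-level setup above can be illustrated with a minimal dictionary-based sketch. The lookup table and its entries here are purely illustrative (not drawn from any real corpus); real systems learn these mappings from annotated data.

```python
# Illustrative lookup table: each non-standard token maps to a list of
# standard tokens, which covers both 1-1 and 1-N replacements.
LOOKUP = {
    "pix": ["pictures"],       # 1-1 replacement
    "comming": ["coming"],
    "tomoroe": ["tomorrow"],
    "gonna": ["going", "to"],  # 1-N replacement
}

def normalize(tokens):
    """Replace each token via the lookup table; unknown tokens pass through."""
    out = []
    for tok in tokens:
        out.extend(LOOKUP.get(tok.lower(), [tok]))
    return out

print(normalize("new pix comming tomoroe".split()))
# ['new', 'pictures', 'coming', 'tomorrow']
```

Note that the output is a token-for-token rewrite of the input (apart from 1-N expansions): no tokens are inserted, deleted, or reordered beyond what the annotated replacements specify, matching the task definition.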
Latest papers
ViLexNorm: A Lexical Normalization Corpus for Vietnamese Social Media Text
In this work, we introduce Vietnamese Lexical Normalization (ViLexNorm), the first-ever corpus developed for the Vietnamese lexical normalization task.
Automatic Textual Normalization for Hate Speech Detection
Our dataset is accessible for research purposes.
ÚFAL at MultiLexNorm 2021: Improving Multilingual Lexical Normalization by Fine-tuning ByT5
We present the winning entry to the Multilingual Lexical Normalization (MultiLexNorm) shared task at W-NUT 2021 (van der Goot et al., 2021a), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
DaN+: Danish Nested Named Entities and Lexical Normalization
We examine language-specific versus multilingual BERT, and study the effect of lexical normalization on NER.
User-Generated Text Corpus for Evaluating Japanese Morphological Analysis and Lexical Normalization
Morphological analysis (MA) and lexical normalization (LN) are both important tasks for Japanese user-generated text (UGT).
Lexical Normalization for Code-switched Data and its Effect on POS Tagging
Lexical normalization, the translation of non-canonical data to standard language, has been shown to improve the performance of many natural language processing tasks on social media.
A Clustering Framework for Lexical Normalization of Roman Urdu
Roman Urdu is an informal form of the Urdu language written in Roman script, which is widely used in South Asia for online textual content.
Adapting Deep Learning for Sentiment Classification of Code-Switched Informal Short Text
Such informal and code-switched content is under-resourced in terms of labeled datasets and language models, even for popular tasks like sentiment classification.
A Multi-cascaded Deep Model for Bilingual SMS Classification
Our model achieves high accuracy for classification on this dataset and outperforms the previous model for multilingual text classification, highlighting language independence of McM.
MoNoise: A Multi-lingual and Easy-to-use Lexical Normalization Tool
In this paper, we introduce and demonstrate the online demo as well as the command line interface of a lexical normalization system (MoNoise) for a variety of languages.