12 papers with code • 1 benchmark • 1 dataset
Lexical normalization is the task of translating/transforming non-standard text into a standard register.
Example: new pix comming tomoroe → new pictures coming tomorrow
Datasets usually consist of tweets, since these naturally contain a fair amount of such phenomena.
For lexical normalization, only replacements at the word level are annotated. Some corpora also include annotation for 1-N and N-1 replacements. However, word insertion, deletion, and reordering are not part of the task.
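The word-level replacement scheme above can be sketched as a simple lookup-based normalizer. This is a minimal illustration, not any benchmark system; the lookup table and tokenization are hypothetical, and real systems (e.g. MoNoise) rank candidates with learned features rather than a fixed dictionary.

```python
# Hypothetical lookup table mapping non-standard tokens to their
# standard replacements (illustrative entries only).
NORM_LOOKUP = {
    "pix": ["pictures"],       # 1-1 replacement
    "comming": ["coming"],     # 1-1 replacement (spelling)
    "tomoroe": ["tomorrow"],   # 1-1 replacement (spelling)
    "gonna": ["going", "to"],  # 1-N replacement: one token becomes two
}

def normalize(tokens):
    """Normalize word by word: each token is either kept as-is or
    replaced by one or more standard tokens. Insertion, deletion,
    and reordering beyond these replacements are out of scope."""
    out = []
    for tok in tokens:
        out.extend(NORM_LOOKUP.get(tok.lower(), [tok]))
    return out

print(normalize("new pix comming tomoroe".split()))
# → ['new', 'pictures', 'coming', 'tomorrow']
```

Because replacements are annotated per word, evaluation can align each input token with its (possibly multi-token) gold replacement, which is what makes 1-N and N-1 cases worth annotating separately.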
We show that MoNoise beats the state-of-the-art on different normalization benchmarks for English and Dutch, which all define the task of normalization slightly differently.
Recently introduced neural network parsers allow for new approaches to circumvent data sparsity issues by modeling character level information and by exploiting raw data in a semi-supervised setting.
Social media offer an abundant source of valuable raw data; however, informal writing can quickly become a bottleneck for many natural language processing (NLP) tasks.
In this paper, we introduce and demonstrate the online demo as well as the command line interface of a lexical normalization system (MoNoise) for a variety of languages.
Our model achieves high accuracy for classification on this dataset and outperforms the previous model for multilingual text classification, highlighting the language independence of McM.
Such informal and code-switched content is under-resourced in terms of labeled datasets and language models, even for popular tasks like sentiment classification.
Roman Urdu is an informal form of the Urdu language written in Roman script, which is widely used in South Asia for online textual content.
Lexical normalization, the translation of non-canonical data to standard language, has been shown to improve the performance of many natural language processing tasks on social media.
Morphological analysis (MA) and lexical normalization (LN) are both important tasks for Japanese user-generated text (UGT).