Lexical Simplification
19 papers with code • 0 benchmarks • 1 dataset
The goal of Lexical Simplification is to replace complex words (typically words that are used less often in language and are therefore less familiar to readers) with simpler synonyms, without harming the grammaticality or changing the meaning of the text.
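The task definition above can be sketched as a tiny substitute-and-rank loop: flag words whose corpus frequency falls below a threshold, then swap in the most frequent known synonym. The word frequencies, synonym table, and threshold below are toy assumptions for illustration, not data from any of the listed papers.

```python
# Minimal frequency-based lexical simplifier (illustrative sketch only;
# WORD_FREQ, SYNONYMS, and FREQ_THRESHOLD are toy assumptions).
WORD_FREQ = {"use": 900, "utilize": 40, "help": 850, "facilitate": 30,
             "buy": 800, "procure": 25}
SYNONYMS = {"utilize": ["use"], "facilitate": ["help"], "procure": ["buy"]}
FREQ_THRESHOLD = 100  # words rarer than this count as "complex"

def simplify(sentence: str) -> str:
    out = []
    for token in sentence.split():
        word = token.lower().strip(".,")
        if WORD_FREQ.get(word, 0) < FREQ_THRESHOLD and word in SYNONYMS:
            # pick the most frequent (i.e. most familiar) known synonym
            best = max(SYNONYMS[word], key=lambda w: WORD_FREQ.get(w, 0))
            out.append(token.replace(word, best))
        else:
            out.append(token)
    return " ".join(out)
```

Real systems replace the hand-built tables with corpus statistics and learned substitute generators, but the select-substitute-rerank shape stays the same.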
Source: Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Most implemented papers
LSBert: A Simple Framework for Lexical Simplification
Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning, to simplify the sentence.
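The core LSBert move is to mask the complex word and let a masked language model rank candidate substitutes by how well they fit the slot. The sketch below only shows the ranking step; the probability table is a toy stand-in for BERT's masked-LM output (assumed values, not real model scores).

```python
# Sketch of masked-LM substitute ranking in the spirit of LSBert.
# TOY_MLM_SCORES stands in for BERT's predictions at the masked position in
# "the medicine will [MASK] the pain" -- assumed values, not model output.
from typing import Dict, List

TOY_MLM_SCORES: Dict[str, float] = {
    "ease": 0.41, "alleviate": 0.08, "mitigate": 0.05, "cat": 0.0001,
}

def rank_substitutes(candidates: List[str],
                     mlm_scores: Dict[str, float]) -> List[str]:
    """Order candidate substitutes by masked-LM probability, highest first."""
    return sorted(candidates, key=lambda w: mlm_scores.get(w, 0.0),
                  reverse=True)

ranked = rank_substitutes(["alleviate", "ease", "mitigate"], TOY_MLM_SCORES)
```

In the full system these scores come from a real masked language model and are combined with additional features (e.g. word frequency and similarity) before the final substitute is chosen.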
Chinese Lexical Simplification
Lexical simplification, the process of replacing complex words in a given sentence with simpler alternatives of equivalent meaning, has attracted much attention in many languages.
UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical Simplification?
Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential.
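Prompting a large language model replaces that multi-component pipeline with a single request. The template below is a generic zero-shot example in the spirit of such approaches; the actual prompts, ensembling, and API calls used by UniHD differ, and this sketch only builds the request text.

```python
# Generic zero-shot prompt template for LLM-based lexical simplification
# (an illustrative assumption, not the prompt from the UniHD paper).
def build_ls_prompt(sentence: str, complex_word: str, n: int = 3) -> str:
    """Build a prompt asking a language model for simpler substitutes."""
    return (
        f"Context: {sentence}\n"
        f'Question: Give {n} simpler alternatives for the word '
        f'"{complex_word}" that keep the sentence grammatical and '
        f"preserve its meaning.\n"
        f"Answer:"
    )

prompt = build_ls_prompt("The committee will convene tomorrow.", "convene")
```

The returned string would then be sent to the model of choice; parsing the completion back into a ranked substitute list is the only remaining pipeline step.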
Controllable Lexical Simplification for English
Fine-tuned Transformer-based approaches have recently shown exciting results on the sentence simplification task.
Teaching the Pre-trained Model to Generate Simple Texts for Text Simplification
In this paper, we propose a new continued pre-training strategy to teach the pre-trained model to generate simple texts.
Multilingual Controllable Transformer-Based Lexical Simplification
Moreover, further evaluation of our approach on part of the recent TSAR-2022 multilingual LS shared-task dataset shows that our model performs competitively with the participating systems for English LS and even outperforms the GPT-3 model on several metrics.
Multilingual Lexical Simplification via Paraphrase Generation
After feeding the input sentence into the encoder of the paraphrase model, we generate the substitutes with a novel decoding strategy that concentrates solely on the lexical variations of the complex word.
Unsupervised Lexical Simplification with Context Augmentation
We propose a new unsupervised lexical simplification method that uses only monolingual data and pre-trained language models.