Lexical Simplification
16 papers with code • 0 benchmarks • 1 dataset
The goal of Lexical Simplification is to replace complex words (typically words that are used less often in language and are therefore less familiar to readers) with simpler synonyms, without compromising the grammaticality or changing the meaning of the text.
Source: Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
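As a concrete illustration of the task (not tied to any particular system listed below), the sketch that follows substitutes a complex word with a more frequent WordNet synonym, using word frequency as a rough proxy for familiarity. The choice of nltk and wordfreq, and the frequency threshold, are illustrative assumptions rather than part of any cited method.

```python
# Minimal lexical-simplification sketch (illustrative only):
# candidates come from WordNet synonyms, and word frequency is used
# as a crude proxy for simplicity/familiarity. Word sense and sentence
# context are ignored here, although real LS systems handle both.
import nltk
from nltk.corpus import wordnet as wn
from wordfreq import zipf_frequency  # assumed available: pip install wordfreq

nltk.download("wordnet", quiet=True)

def simpler_synonym(word, min_gain=0.5):
    """Return a more frequent synonym of `word`, or `word` if none is found."""
    original_freq = zipf_frequency(word, "en")
    candidates = {
        lemma.name().replace("_", " ")
        for synset in wn.synsets(word)
        for lemma in synset.lemmas()
        if lemma.name().lower() != word.lower()
    }
    best = max(candidates, key=lambda c: zipf_frequency(c, "en"), default=word)
    # Substitute only if the candidate is clearly more frequent than the original.
    if zipf_frequency(best, "en") - original_freq >= min_gain:
        return best
    return word

print(simpler_synonym("commence"))  # likely "start" or "begin"
```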
Benchmarks
These leaderboards are used to track progress in Lexical Simplification
Most implemented papers
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.
Lexical Simplification with Pretrained Encoders
Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning.
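The sketch below illustrates the masked language-model idea behind this line of work, not the paper's exact pipeline: mask the complex word and let a pretrained encoder propose in-context substitution candidates. The bert-base-uncased model and the Hugging Face fill-mask pipeline are assumptions made for the example; LSBert-style systems additionally condition on the original sentence and rank candidates with further features.

```python
# Sketch of masked-LM candidate generation for lexical simplification.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The committee will scrutinise the proposal."
complex_word = "scrutinise"
masked = sentence.replace(complex_word, fill_mask.tokenizer.mask_token)

# Top in-context candidates; a real system would filter out the original
# word, re-inflect candidates, and rank them by simplicity and meaning
# preservation before choosing a substitution.
for candidate in fill_mask(masked, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```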
Multi-Word Lexical Simplification
In this work we propose the task of multi-word lexical simplification, in which a sentence in natural language is made easier to understand by replacing a fragment of it with a simpler alternative, where both the fragment and its replacement can consist of multiple words.
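For intuition only, a toy version of the multi-word setting, with an invented paraphrase table standing in for a learned model: both the replaced fragment and its substitute may span several words.

```python
# Toy illustration of multi-word lexical simplification.
# The paraphrase table is invented for illustration, not taken from the paper.
PARAPHRASES = {
    "in spite of the fact that": "although",
    "a large number of": "many",
}

def simplify_fragments(sentence):
    # Replace each known complex fragment with its simpler alternative.
    for complex_fragment, simple_fragment in PARAPHRASES.items():
        sentence = sentence.replace(complex_fragment, simple_fragment)
    return sentence

print(simplify_fragments(
    "The vote went ahead in spite of the fact that a large number of members objected."
))
# -> "The vote went ahead although many members objected."
```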
Lexical Simplification Benchmarks for English, Portuguese, and Spanish
To showcase the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with differing architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performance on our new dataset.
Exploring Neural Text Simplification Models
Unlike the previously proposed automated TS systems, our neural text simplification (NTS) systems are able to simultaneously perform lexical simplification and content reduction.
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space.
A Word-Complexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification
Current lexical simplification approaches rely heavily on heuristics and corpus level features that do not always align with human judgment.
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
In this work, we complement such distributional knowledge with external lexical knowledge, that is, we integrate the discrete knowledge on word-level semantic similarity into pretraining.