Lexical Simplification

19 papers with code • 0 benchmarks • 1 dataset

The goal of Lexical Simplification is to replace complex words (typically words that are used less often in language and are therefore less familiar to readers) with simpler synonyms, without compromising the grammaticality or changing the meaning of the text.
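A common baseline for this task identifies complex words by corpus frequency and substitutes the most frequent synonym. The sketch below illustrates that idea; the synonym lists, frequency counts, and threshold are illustrative stand-ins, not values from any real corpus or system described here.

```python
# Minimal frequency-based lexical simplification sketch.
# SYNONYMS and FREQUENCY are toy, hand-made examples (hypothetical values).

SYNONYMS = {
    "utilize": ["use", "employ"],
    "commence": ["start", "begin"],
    "endeavor": ["try", "attempt"],
}

# Hypothetical per-million-word frequencies (higher = more familiar).
FREQUENCY = {
    "utilize": 12, "use": 1900, "employ": 85,
    "commence": 10, "start": 900, "begin": 420,
    "endeavor": 6, "try": 1100, "attempt": 130,
}

COMPLEXITY_THRESHOLD = 50  # words rarer than this count as "complex"


def simplify(sentence: str) -> str:
    """Replace each complex word with its most frequent listed synonym."""
    out = []
    for token in sentence.split():
        word = token.lower().strip(".,!?")
        if word in SYNONYMS and FREQUENCY.get(word, 0) < COMPLEXITY_THRESHOLD:
            # Pick the candidate substitute with the highest frequency.
            best = max(SYNONYMS[word], key=lambda s: FREQUENCY.get(s, 0))
            token = token.replace(word, best)  # keep trailing punctuation
        out.append(token)
    return " ".join(out)
```

State-of-the-art systems replace the static dictionary with substitutes proposed and ranked by a pretrained language model, but the replace-then-rank structure is the same.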

Source: Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization

Most implemented papers

RoBERTa: A Robustly Optimized BERT Pretraining Approach

pytorch/fairseq 26 Jul 2019

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.

Lexical Simplification with Pretrained Encoders

qiang2100/BERT-LS 14 Jul 2019

Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning.

Multi-Word Lexical Simplification

piotrmp/mwls1 COLING 2020

In this work we propose the task of multi-word lexical simplification, in which a sentence in natural language is made easier to understand by replacing its fragment with a simpler alternative, both of which can consist of many words.

Lexical Simplification Benchmarks for English, Portuguese, and Spanish

lastus-taln-upf/tsar-2022-shared-task 12 Sep 2022

To showcase the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with differing architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performances on our new dataset.

Exploring Neural Text Simplification Models

senisioi/NeuralTextSimplification ACL 2017

Unlike the previously proposed automated TS systems, our neural text simplification (NTS) systems are able to simultaneously perform lexical simplification and content reduction.

Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization

cambridgeltl/adversarial-postspec EMNLP 2018

Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space.

A Word-Complexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification

mounicam/lexical_simplification EMNLP 2018

Current lexical simplification approaches rely heavily on heuristics and corpus level features that do not always align with human judgment.

Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity

anlausch/LIBERT COLING 2020

In this work, we complement such distributional knowledge with external lexical knowledge, that is, we integrate the discrete knowledge on word-level semantic similarity into pretraining.