Lexical Complexity Prediction

8 papers with code • 0 benchmarks • 0 datasets

Predicting the complexity of a word or multi-word expression in a sentence.
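
To make the task concrete, here is a minimal, hypothetical baseline in Python: it scores a target's complexity by its corpus frequency, on the intuition that rarer words tend to be harder. The toy corpus, the smoothing, and the log-frequency squashing are illustrative assumptions, not part of any listed system, and the sentence context is ignored.

```python
from collections import Counter
import math

# Toy corpus; a real system would use large-scale frequency counts.
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())

def predict_complexity(sentence: str, target: str) -> float:
    """Map the target's smoothed relative frequency to a [0, 1] score.

    The sentence is accepted for interface parity but unused:
    this toy baseline is context-insensitive.
    """
    freq = (counts.get(target.lower(), 0) + 0.5) / total  # add-0.5 smoothing
    # Rarer word -> larger negative log-frequency -> higher complexity.
    return min(1.0, max(0.0, -math.log(freq) / 10.0))

print(predict_complexity("The cat sat on the mat.", "mat"))
```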

Most implemented papers

cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora

abhi1nandy2/CS60075-Team-2-Task-1 4 Jun 2021

This paper describes the performance of the team cs60075_team2 at SemEval 2021 Task 1 - Lexical Complexity Prediction.

Japanese Lexical Complexity for Non-Native Readers: A New Dataset

naist-nlp/jalecon 30 Jun 2023

Lexical complexity prediction (LCP) is the task of predicting the complexity of words in a text on a continuous scale.
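
Since the task is regression on a continuous scale, systems are typically ranked by correlation with the gold ratings; SemEval-2021 Task 1 used Pearson's r as its primary metric. A self-contained sketch of that computation, on invented gold and predicted scores:

```python
from statistics import mean

def pearson_r(gold, pred):
    """Pearson correlation between gold and predicted complexity scores."""
    gm, pm = mean(gold), mean(pred)
    cov = sum((g - gm) * (p - pm) for g, p in zip(gold, pred))
    sd_g = sum((g - gm) ** 2 for g in gold) ** 0.5
    sd_p = sum((p - pm) ** 2 for p in pred) ** 0.5
    return cov / (sd_g * sd_p)

gold = [0.20, 0.45, 0.10, 0.80]  # invented annotator scores
pred = [0.25, 0.40, 0.15, 0.70]  # invented system outputs
print(f"Pearson's r = {pearson_r(gold, pred):.3f}")
```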

CompLex: A New Corpus for Lexical Complexity Prediction from Likert Scale Data

neilrs123/Lexical-Complexity-Prediction 16 Mar 2020

With a few exceptions, previous studies have approached this as a binary classification task in which systems predict a complexity label (complex vs. non-complex) for a set of target words in a text.
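
CompLex instead aggregates 5-point Likert judgments into a continuous score. A sketch of that aggregation, assuming the usual mapping of ratings 1-5 onto [0, 1] followed by averaging over annotators; the ratings below are invented:

```python
def likert_to_score(ratings):
    """Map 1-5 Likert ratings onto [0, 1] and average over annotators."""
    return sum((r - 1) / 4 for r in ratings) / len(ratings)

annotator_ratings = [2, 3, 2, 4]  # hypothetical ratings for one target word
print(f"continuous complexity = {likert_to_score(annotator_ratings):.3f}")
```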

IITK@LCP at SemEval 2021 Task 1: Classification for Lexical Complexity Regression Task

neilrs123/Lexical-Complexity-Prediction 2 Apr 2021

This paper describes our contribution to SemEval 2021 Task 1: Lexical Complexity Prediction.
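
The title points at casting the regression problem as classification. A hypothetical sketch of that idea: bin continuous scores into discrete classes for training, then map predicted classes back to scores; the bin edges and class centers below are assumptions, not the paper's configuration.

```python
import bisect

EDGES = [0.2, 0.4, 0.6, 0.8]         # boundaries of 5 bins over [0, 1]
CENTERS = [0.1, 0.3, 0.5, 0.7, 0.9]  # score each class maps back to

def score_to_class(score: float) -> int:
    """Discretize a continuous complexity score into a class label."""
    return bisect.bisect_right(EDGES, score)

def class_to_score(label: int) -> float:
    """Map a predicted class label back to a continuous score."""
    return CENTERS[label]

print(score_to_class(0.45))  # -> 2
print(class_to_score(2))     # -> 0.5
```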

BigGreen at SemEval-2021 Task 1: Lexical Complexity Prediction with Assembly Models

Aadil101/BigGreen-at-LCP-2021 SEMEVAL 2021

This paper describes a system submitted by team BigGreen to LCP 2021 for predicting the lexical complexity of English words in a given context.

ANDI at SemEval-2021 Task 1: Predicting complexity in context using distributional models, behavioural norms, and lexical resources

armandrotaru/teamandi-lcp SEMEVAL 2021

In this paper we describe our participation in the Lexical Complexity Prediction (LCP) shared task of SemEval 2021, which involved predicting subjective ratings of complexity for English single words and multi-word expressions, presented in context.
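
A hedged sketch of the general recipe this description suggests: concatenate distributional features with behavioural norms and lexical features, then fit a regressor. The feature set, all values, and the choice of ridge regression are illustrative assumptions, not the system's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# One row per target word: two toy embedding dimensions, log frequency,
# word length, and age of acquisition -- all values are invented.
X = np.array([
    [0.12, -0.30, 9.1,  3,  4.2],
    [0.55,  0.10, 4.3, 11, 10.8],
    [0.01,  0.44, 7.6,  6,  6.5],
])
y = np.array([0.10, 0.72, 0.35])  # invented gold complexity scores

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X))
```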

cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora

abhi1nandy2/CS60075-Team-2-Task-1 SEMEVAL 2021

The main contribution of this paper is to fine-tune transformer-based language models pre-trained on several text corpora, some general (e.g., Wikipedia, BooksCorpus), some the corpora from which the CompLex dataset was extracted, and others from specific domains such as finance and law.
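
A minimal sketch of such a fine-tuning setup via the Hugging Face transformers API, where a single-output head turns a pre-trained encoder into a regressor. The checkpoint name, the sentence-pair encoding of context and target, and the gold score are assumptions for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one output unit -> regression head
)

# Encode the context sentence and the target word as a sentence pair.
enc = tokenizer("A very ebullient reply.", "ebullient", return_tensors="pt")
labels = torch.tensor([[0.6]])  # invented gold complexity score

out = model(**enc, labels=labels)  # float labels + num_labels=1 -> MSE loss
out.loss.backward()                # one illustrative gradient step
print(float(out.logits))
```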

Automatic Readability Assessment of German Sentences with Transformer Ensembles

dslaborg/tcc2022 GermEval 2022

In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences.
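
A sketch of the ensembling step, assuming each fine-tuned model emits one readability score per sentence and the ensemble simply averages them; the paper's actual combination scheme may differ, and all scores below are invented.

```python
from statistics import mean

model_scores = {  # hypothetical per-sentence predictions per ensemble member
    "gbert-1":        [0.31, 0.74, 0.52],
    "gbert-2":        [0.28, 0.70, 0.55],
    "gpt2-wechsel-1": [0.35, 0.77, 0.49],
}

# Average the models' predictions sentence by sentence.
ensemble = [mean(scores) for scores in zip(*model_scores.values())]
print([round(s, 3) for s in ensemble])
```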