Linguistic Acceptability

47 papers with code • 5 benchmarks • 5 datasets

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
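In practice this is framed as binary sentence classification. A minimal sketch using a Hugging Face pipeline is shown below; the checkpoint name `textattack/bert-base-uncased-CoLA` is an assumption (any CoLA-style binary acceptability model works the same way), and the exact label strings depend on the checkpoint.

```python
# Sketch: binary acceptability classification with a CoLA-finetuned model.
# The model name below is an assumption, not prescribed by this page.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-CoLA",  # assumed checkpoint
)

for sentence in [
    "The cat sat on the mat.",   # intuitively acceptable
    "The cat sat mat the on.",   # intuitively unacceptable
]:
    # Each prediction is a dict with a label and a confidence score.
    result = classifier(sentence)[0]
    print(f"{sentence!r} -> {result['label']} ({result['score']:.3f})")
```

The same interface applies to the non-English corpora listed below (JCoLA, NoCoLA, CoLAC, RuCoLA), given a model fine-tuned on the corresponding dataset.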

Image Source: Warstadt et al.

JCoLA: Japanese Corpus of Linguistic Acceptability

osekilab/jcola 22 Sep 2023

In this paper, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which consists of 10,020 sentences annotated with binary acceptability judgments.

NoCoLA: The Norwegian Corpus of Linguistic Acceptability

ltgoslo/nocola 13 Jun 2023

While there has been a surge of large language models for Norwegian in recent years, we lack any tool to evaluate their understanding of grammaticality.

CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models

lijiazheng99/cue 6 Jun 2023

We then generate text representations by perturbing the latent space which causes fluctuation in predictive uncertainty.

LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning

amirabaskohi/lm-cppf 29 May 2023

This paper proposes LM-CPPF, Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models, which leverages prompt-based few-shot paraphrasing using generative language models, especially large language models such as GPT-3 and OPT-175B, for data augmentation.

Revisiting Acceptability Judgements

huhailinguist/colac 23 May 2023

We introduce CoLAC - Corpus of Linguistic Acceptability in Chinese, the first large-scale acceptability dataset for a non-Indo-European language.

Can BERT eat RuCoLA? Topological Data Analysis to Explain

upunaprosk/la-tda 4 Apr 2023

Our results contribute to understanding the behavior of monolingual LMs in the acceptability classification task, provide insights into the functional roles of attention heads, and highlight the advantages of TDA-based approaches for analyzing LMs.

ScandEval: A Benchmark for Scandinavian Natural Language Processing

saattrupdan/scandeval 3 Apr 2023

This paper introduces a Scandinavian benchmarking platform, ScandEval, which can benchmark any pretrained model on four different tasks in the Scandinavian languages.

ChatGPT: Jack of all trades, master of none

clarin-pl/chatgpt-evaluation-01-2023 21 Feb 2023

Our comparison of its results with available State-of-the-Art (SOTA) solutions showed that the average loss in quality of the ChatGPT model was about 25% for zero-shot and few-shot evaluation.

tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation

sileod/tasksource 14 Jan 2023

We release a dataset annotation framework and dataset annotations for more than 500 English tasks\footnote{\url{https://github.com/sileod/tasksource}}.

RuCoLA: Russian Corpus of Linguistic Acceptability

russiannlp/rucola 23 Oct 2022

Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers.
