Linguistic Acceptability

47 papers with code • 5 benchmarks • 5 datasets

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
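In practice, the task is usually framed as binary sentence classification (acceptable vs. unacceptable). A minimal sketch of that framing with a toy bag-of-words classifier — the four training sentences and labels are invented for illustration; real systems fine-tune pretrained language models on corpora such as CoLA:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = acceptable, 0 = unacceptable (invented examples).
sentences = [
    "the cat sat on the mat",        # acceptable
    "she reads a book every night",  # acceptable
    "cat the mat on sat the",        # unacceptable (scrambled)
    "book a reads night every she",  # unacceptable (scrambled)
]
labels = [1, 1, 0, 0]

# Unigram features ignore word order entirely, so include bigrams
# to give the model some signal about local ordering.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(sentences, labels)

print(clf.predict(["the cat reads a book"]))
```

A linear model over n-grams is far too weak for the real task; it is only meant to make the input/output contract of the classification framing concrete.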

Image Source: Warstadt et al.

Latest papers with no code

MELA: Multilingual Evaluation of Linguistic Acceptability

no code yet • 15 Nov 2023

Recent benchmarks for Large Language Models (LLMs) have mostly focused on application-driven tasks such as complex reasoning and code generation, leaving a scarcity of purely linguistic evaluations of LLMs.

Data-Free Distillation of Language Model by Text-to-Text Transfer

no code yet • 3 Nov 2023

Data-Free Knowledge Distillation (DFKD) plays a vital role in compressing the model when original training data is unavailable.

Not all layers are equally as important: Every Layer Counts BERT

no code yet • 3 Nov 2023

This paper introduces a novel modification of the transformer architecture, tailored for the data-efficient pretraining of language models.

How well can machine-generated texts be identified and can language models be trained to avoid identification?

no code yet • 25 Oct 2023

Shallow learning classifiers differ from human-based detection, especially when higher temperature values are used during text generation, which results in a lower detection rate.

Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection

no code yet • 31 Jul 2023

Neural ranking models (NRMs) have undergone significant development and have become integral components of information retrieval (IR) systems.

A Neural-Symbolic Approach Towards Identifying Grammatically Correct Sentences

no code yet • 16 Jul 2023

By combining classic and modern AI — blending grammatical and syntactic rules with language models — we effectively tackle the Corpus of Linguistic Acceptability (CoLA), a task that determines whether or not a sequence of words forms a grammatical English sentence.
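CoLA is conventionally scored with the Matthews correlation coefficient (MCC) rather than plain accuracy, because the acceptable/unacceptable classes are imbalanced. A small sketch of the scoring step — the gold labels and predictions below are invented:

```python
from sklearn.metrics import matthews_corrcoef

# Invented gold labels and system predictions:
# 1 = acceptable, 0 = unacceptable.
gold = [1, 1, 1, 1, 0, 0, 1, 0]
pred = [1, 1, 0, 1, 0, 1, 1, 0]

# MCC ranges from -1 to +1; 0 corresponds to chance-level prediction,
# and it stays informative even when one class dominates.
print(round(matthews_corrcoef(gold, pred), 3))
```

Here the system gets 6 of 8 sentences right, and the two errors (one false positive, one false negative) yield an MCC of 0.467.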

Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE

no code yet • 18 Feb 2023

This technical report briefly describes our JDExplore d-team's submission, Vega v1, to the General Language Understanding Evaluation (GLUE) leaderboard. GLUE is a collection of nine natural language understanding tasks, including question answering, linguistic acceptability, sentiment analysis, text similarity, paraphrase detection, and natural language inference.

Cross-Architecture Distillation Using Bidirectional CMOW Embeddings

no code yet • 29 Sep 2021

We match or exceed the scores of ELMo, and only fall behind more expensive models on linguistic acceptability.

Revisiting the Uniform Information Density Hypothesis

no code yet • EMNLP 2021

The uniform information density (UID) hypothesis posits a preference among language users for utterances structured such that information is distributed uniformly across a signal.
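One common operationalization of UID scores an utterance by the variance of its per-token surprisal: the lower the variance, the more uniformly information is spread. A minimal sketch — the per-token probabilities below are invented for illustration; in practice they would come from a language model:

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with probability p."""
    return -math.log2(p)

def uid_variance(probs):
    """Variance of per-token surprisal; lower = more uniform (more UID-like)."""
    s = [surprisal(p) for p in probs]
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)

# Invented per-token probabilities for two utterances.
even = [0.25, 0.25, 0.25, 0.25]  # information spread uniformly
spiky = [0.9, 0.9, 0.01, 0.9]    # one highly surprising token

print(uid_variance(even))   # 0.0: every token carries exactly 2 bits
print(uid_variance(spiky))  # much larger: information is bunched up
```

Under the UID hypothesis, speakers would prefer formulations resembling the first profile over the second when both convey the same content.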

An Automated Knowledge Mining and Document Classification System with Multi-model Transfer Learning

no code yet • 24 Jun 2021

The performance of the proposed system has been evaluated by comparison with two robust baseline methods, BERT and BERT-CNN.