RuCoLA: Russian Corpus of Linguistic Acceptability

Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers. However, the application scope of LA in languages other than English is limited due to the lack of high-quality resources. To this end, we introduce the Russian Corpus of Linguistic Acceptability (RuCoLA), built from the ground up under the well-established binary LA approach. RuCoLA consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models. The out-of-domain set is created to facilitate the practical use of acceptability for improving language generation. Our paper describes the data collection protocol and presents a fine-grained analysis of acceptability classification experiments with a range of baseline approaches. In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard (rucola-benchmark.com) to assess the linguistic competence of language models for Russian.


Datasets


Introduced in the Paper:

RuCoLA

Used in the Paper:

GLUE CoLA WikiMatrix ItaCoLA
| Task                     | Dataset | Model     | Metric   | Value | Global Rank |
|--------------------------|---------|-----------|----------|-------|-------------|
| Linguistic Acceptability | CoLA    | RemBERT   | MCC      | 0.6   | # 3         |
| Linguistic Acceptability | ItaCoLA | mBERT     | MCC      | 0.36  | # 4         |
| Linguistic Acceptability | ItaCoLA | XLM-R     | MCC      | 0.52  | # 2         |
| Linguistic Acceptability | RuCoLA  | XLM-R     | Accuracy | 61.13 | # 7         |
|                          |         |           | MCC      | 0.13  | # 9         |
| Linguistic Acceptability | RuCoLA  | ruT5      | Accuracy | 68.41 | # 6         |
|                          |         |           | MCC      | 0.25  | # 7         |
| Linguistic Acceptability | RuCoLA  | ruRoBERTa | Accuracy | 79.34 | # 3         |
|                          |         |           | MCC      | 0.53  | # 2         |
| Linguistic Acceptability | RuCoLA  | ruBERT    | Accuracy | 74.3  | # 5         |
|                          |         |           | MCC      | 0.42  | # 5         |
| Linguistic Acceptability | RuCoLA  | ruGPT-3   | Accuracy | 53.82 | # 8         |
|                          |         |           | MCC      | 0.30  | # 6         |
| Linguistic Acceptability | RuCoLA  | mBERT     | MCC      | 0.15  | # 8         |
| Linguistic Acceptability | RuCoLA  | RemBERT   | Accuracy | 75.06 | # 4         |
|                          |         |           | MCC      | 0.44  | # 4         |
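MCC in the table above is the Matthews correlation coefficient, the standard metric for binary acceptability classification (used by CoLA and its successors) because, unlike accuracy, it remains informative on imbalanced label distributions. A minimal sketch of the standard formula, computed from confusion-matrix counts (the function name `mcc` is ours, not from the paper's code):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance-level)
    to +1 (perfect prediction).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        # Conventional value when any marginal count is zero.
        return 0.0
    return (tp * tn - fp * fn) / denom

# Perfect agreement scores 1.0; chance-level predictions score 0.0.
print(mcc(tp=50, tn=50, fp=0, fn=0))    # 1.0
print(mcc(tp=25, tn=25, fp=25, fn=25))  # 0.0
```

A classifier that labels every sentence "acceptable" can still score high accuracy on a skewed test set while its MCC stays near zero, which is why the leaderboard ranks by both metrics separately.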

Methods


No methods listed for this paper.