Acceptability Judgements via Examining the Topology of Attention Maps

The role of the attention mechanism in encoding linguistic knowledge has received special interest in NLP. However, the ability of attention heads to judge the grammatical acceptability of a sentence has been underexplored. This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the geometric properties of the attention graph can be efficiently exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs. Topological features improve BERT-based acceptability classifier scores by 8%–24% on CoLA in three languages (English, Italian, and Swedish). By revealing the topological discrepancy between attention maps of minimal pairs, we achieve human-level performance on the BLiMP benchmark, outperforming nine statistical and Transformer LM baselines. At the same time, TDA provides a foundation for analyzing the linguistic functions of attention heads and interpreting the correspondence between graph features and grammatical phenomena.
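The paper does not include an implementation here, but the core idea of H0 (connected-component) features over an attention graph can be sketched as follows. In this hypothetical example, each token is a vertex, each edge (i, j) enters a filtration at weight 1 − attention(i, j) so that stronger attention edges appear first, and a union-find pass yields the H0 persistence bars; the exact feature set and filtration used in the paper may differ.

```python
import numpy as np

def h0_barcode(attn):
    """H0 persistence bars of the attention graph (illustrative sketch).

    Vertices are tokens; edge (i, j) enters the filtration at
    1 - attn[i, j], so high-attention edges are added first.
    Each bar (0, death) records when two components merge.
    """
    n = attn.shape[0]
    w = np.maximum(attn, attn.T)  # symmetrize the attention map
    edges = sorted((1.0 - w[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []
    for f, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # two components merge at filtration value f
            parent[ri] = rj
            bars.append((0.0, f))
    return bars               # n - 1 bars for a connected graph

def h0_features(attn, thresholds=(0.1, 0.25, 0.5)):
    """Example features: total H0 persistence and the number of
    connected components when only edges with attention >= t are kept."""
    bars = h0_barcode(attn)
    n = attn.shape[0]
    feats = [sum(d - b for b, d in bars)]
    for t in thresholds:
        merged = sum(1 for _, d in bars if d <= 1.0 - t)
        feats.append(n - merged)
    return feats
```

Features like these, computed per attention head and concatenated, are the kind of input a downstream acceptability classifier can consume.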


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Linguistic Acceptability | CoLA | En-BERT + TDA | Accuracy | 82.1% | #6 |
| Linguistic Acceptability | CoLA | En-BERT + TDA | MCC | 0.565 | #4 |
| Linguistic Acceptability | CoLA | En-BERT + TDA + PCA | Accuracy | 88.6% | #1 |
| Linguistic Acceptability | CoLA Dev | En-BERT + TDA | Accuracy | 88.6 | #1 |
| Linguistic Acceptability | CoLA Dev | En-BERT + TDA | MCC | 0.725 | #1 |
| Linguistic Acceptability | CoLA Dev | En-BERT (pre-trained) + TDA | MCC | 0.420 | #2 |
| Linguistic Acceptability | CoLA Dev | XLM-R (pre-trained) + TDA | Accuracy | 73 | #2 |
| Linguistic Acceptability | DaLAJ | Sw-BERT + H0M | Accuracy | 76.9 | #1 |
| Linguistic Acceptability | DaLAJ | Sw-BERT + H0M | MCC | 0.542 | #1 |
| Linguistic Acceptability | ItaCoLA | It-BERT (pre-trained) + TDA | Accuracy | 89.2 | #2 |
| Linguistic Acceptability | ItaCoLA | It-BERT (pre-trained) + TDA | MCC | 0.478 | #3 |
| Linguistic Acceptability | ItaCoLA | XLM-R + TDA | Accuracy | 92.8 | #1 |
| Linguistic Acceptability | ItaCoLA | XLM-R + TDA | MCC | 0.683 | #1 |

Methods