Rethinking Self-Attention: Towards Interpretability in Neural Parsing

Attention mechanisms have improved the performance of NLP tasks while allowing models to remain explainable. Self-attention is currently widely used; however, interpretability is difficult due to the numerous attention distributions. The paper introduces the Label Attention Layer, a form of self-attention in which each attention head represents a label, and evaluates it on constituency and dependency parsing.
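
To make the core idea concrete, here is a minimal, illustrative PyTorch sketch of label attention: each head's query is a learned vector tied to one label rather than a projection of the input, so every head produces exactly one attention distribution over the sentence that can be inspected directly. This is a simplified sketch, not the paper's implementation; it omits per-head output aggregation and residual connections, and all names (LabelAttention, label_queries) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttention(nn.Module):
    # One attention head per label: the query is a learned per-label vector,
    # not a function of the input, so each head yields a single attention
    # distribution over the tokens, which is what makes it interpretable.
    def __init__(self, num_labels: int, d_model: int):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(num_labels, d_model))
        self.key_proj = nn.Linear(d_model, d_model)
        self.value_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** 0.5

    def forward(self, x: torch.Tensor):
        # x: (seq_len, d_model) token representations for one sentence
        keys = self.key_proj(x)                           # (seq_len, d_model)
        values = self.value_proj(x)                       # (seq_len, d_model)
        scores = self.label_queries @ keys.transpose(0, 1) / self.scale
        attn = F.softmax(scores, dim=-1)                  # (num_labels, seq_len)
        # attn[i] is label-head i's distribution over tokens; reading off
        # these rows is how the heads are analyzed.
        return attn @ values, attn                        # (num_labels, d_model)

# Usage: a 5-token sentence, 12 label heads, 64-dim representations.
layer = LabelAttention(num_labels=12, d_model=64)
outputs, attn = layer(torch.randn(5, 64))
print(outputs.shape, attn.shape)  # torch.Size([12, 64]) torch.Size([12, 5])
```

In contrast to standard multi-head self-attention, where each token computes its own query and the resulting seq_len x seq_len matrices per head are hard to read, each label head here contributes one distribution whose mass can be traced to specific words.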


Datasets

Penn Treebank

Results from the Paper


TASK | DATASET | MODEL | METRIC | VALUE | GLOBAL RANK
Constituency Parsing | Penn Treebank | Label Attention Layer + HPSG + XLNet | F1 score | 96.38 | #1
Dependency Parsing | Penn Treebank | Label Attention Layer + HPSG + XLNet | POS | 97.3 | #4
Dependency Parsing | Penn Treebank | Label Attention Layer + HPSG + XLNet | UAS | 97.42 | #1
Dependency Parsing | Penn Treebank | Label Attention Layer + HPSG + XLNet | LAS | 96.26 | #1
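
For reference, UAS and LAS in the table above are the standard dependency-parsing attachment scores: UAS (unlabeled attachment score) is the fraction of tokens assigned the correct head, and LAS (labeled attachment score) additionally requires the correct dependency label. A minimal sketch of the computation; the function name and (head, label) tuple format are illustrative:

```python
def attachment_scores(gold, pred):
    # gold, pred: one (head_index, dep_label) pair per token.
    # UAS counts tokens whose predicted head matches the gold head;
    # LAS additionally requires the dependency label to match.
    assert len(gold) == len(pred)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred))
    las = sum(g == p for g, p in zip(gold, pred))
    return uas / n, las / n

# Example: 3-token sentence; token 3 gets the right head but the wrong label.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj")]
print(attachment_scores(gold, pred))  # (1.0, 0.666...)
```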
