Linguistically-Informed Self-Attention for Semantic Role Labeling

EMNLP 2018

Emma Strubell • Patrick Verga • Daniel Andor • David Weiss • Andrew McCallum

In this work, we present linguistically-informed self-attention (LISA): a neural network model that combines multi-head self-attention with multi-task learning across dependency parsing, part-of-speech tagging, predicate detection, and SRL. Unlike previous models, which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection, and role labeling for all predicates. In experiments on CoNLL-2005 SRL, LISA achieves new state-of-the-art performance for a model using predicted predicates and standard word embeddings, attaining 2.5 F1 absolute higher than the previous state of the art on newswire and more than 3.5 F1 on out-of-domain data, a nearly 10% reduction in error.
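To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of syntactically-informed self-attention: ordinary multi-head self-attention in which one head's attention distribution is driven by the dependency parse, so each token attends to its syntactic head. The function names, head counts, NumPy setup, and the hard one-hot substitution are illustrative assumptions; in the paper this head is trained jointly with a parsing objective and can use either predicted or gold parses at test time.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, syntax_heads=None, syntax_head_idx=0):
    """X: (T, d_model); Wq/Wk/Wv: (H, d_model, d_head).
    syntax_heads: length-T array where syntax_heads[t] is the index of token t's
    dependency head (t itself for the root). If given, head `syntax_head_idx`
    attends one-hot to the parse head instead of using its learned scores."""
    H, d_model, d_head = Wq.shape
    T = X.shape[0]
    outputs = []
    for h in range(H):
        Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]
        scores = Q @ K.T / np.sqrt(d_head)            # (T, T) scaled dot-product
        attn = softmax(scores, axis=-1)
        if syntax_heads is not None and h == syntax_head_idx:
            # Syntactically-informed head: attend to the dependency head
            # (a hard substitution here; the paper supervises this head softly).
            attn = np.eye(T)[syntax_heads]            # one-hot rows
        outputs.append(attn @ V)                      # (T, d_head)
    return np.concatenate(outputs, axis=-1)           # (T, H * d_head)

# Toy usage: 5 tokens, 4 heads; token 2 is the (hypothetical) root of the parse.
rng = np.random.default_rng(0)
T, d_model, H, d_head = 5, 16, 4, 4
X = rng.normal(size=(T, d_model))
Wq, Wk, Wv = (rng.normal(size=(H, d_model, d_head)) * 0.1 for _ in range(3))
heads = np.array([2, 2, 2, 4, 2])  # hypothetical dependency heads
out = multi_head_self_attention(X, Wq, Wk, Wv, syntax_heads=heads)
print(out.shape)  # (5, 16)
```

Because the parse-aware head shares the same encoder pass as the other heads, the sequence is encoded only once while still exposing syntax to the downstream predicate-detection and role-labeling layers.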


Evaluation


| Task | Dataset | Model | Metric | Value | Global rank |
|------|---------|-------|--------|-------|-------------|
| Semantic Role Labeling (predicted predicates) | CoNLL 2005 | LISA + ELMo | F1 | 86.90 | #1 |
| Semantic Role Labeling | CoNLL 2005 | LISA | F1 | 86.04 | #4 |
| Semantic Role Labeling (predicted predicates) | CoNLL 2005 | LISA | F1 | 84.99 | #3 |
| Predicate Detection | CoNLL 2005 | LISA | F1 | 98.4 | #1 |
| Semantic Role Labeling (predicted predicates) | CoNLL 2012 | LISA + ELMo | F1 | 83.38 | #1 |
| Semantic Role Labeling (predicted predicates) | CoNLL 2012 | LISA | F1 | 82.33 | #3 |
| Predicate Detection | CoNLL 2012 | LISA | F1 | 97.2 | #1 |