Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling

15 Apr 2021 · Alireza Mohammadshahi, James Henderson

Recent models have shown that incorporating syntactic knowledge into the semantic role labelling (SRL) task leads to significant improvements. In this paper, we propose the Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) model, which encodes syntactic structure using a novel way of inputting graph relations as embeddings directly into the self-attention mechanism of the Transformer. This approach adds a soft bias towards attention patterns that follow the syntactic structure, but also allows the model to use this information to learn alternative patterns. We evaluate our model on both span-based (CoNLL 2005) and dependency-based (CoNLL 2009) SRL datasets, and it outperforms previous alternative methods in both in-domain and out-of-domain settings.
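The core mechanism, inputting graph relations as embeddings into self-attention, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation in the spirit of relation-conditioned attention (single head, batch-first tensors); the class name, the use of separate key-side and value-side relation embeddings, and all shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SyntaxAwareSelfAttention(nn.Module):
    """Single-head self-attention with learned graph-relation embeddings.

    Sketch of the idea in the abstract: an embedding for the syntactic
    relation between tokens i and j enters the attention computation,
    biasing (but not constraining) attention towards the syntax.
    """

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One learned embedding per dependency-relation label
        # (including a "no relation" label); hypothetical design choice.
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); rel_ids: (batch, seq, seq) label ids
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk = self.rel_k(rel_ids)  # (batch, seq, seq, d_model)
        rv = self.rel_v(rel_ids)  # (batch, seq, seq, d_model)
        # Content term q_i . k_j plus additive relation term q_i . r_ij.
        scores = torch.einsum("bid,bjd->bij", q, k)
        scores = scores + torch.einsum("bid,bijd->bij", q, rk)
        attn = F.softmax(scores * self.scale, dim=-1)
        # Values likewise receive a per-pair relation component.
        out = torch.einsum("bij,bjd->bid", attn, v)
        out = out + torch.einsum("bij,bijd->bid", attn, rv)
        return out
```

Because the relation term is additive inside the softmax, syntactic edges only bias the attention distribution: the content term can still dominate, which is how such a model stays free to learn attention patterns that deviate from the parse.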

Datasets

CoNLL 2005 · CoNLL 2009

Results from the Paper


Ranked #6 on Semantic Role Labeling on CoNLL 2005 (using extra training data)

Task: Semantic Role Labeling
Dataset: CoNLL 2005
Model: Mohammadshahi and Henderson (2021)
Metric: F1
Value: 88.93
Global Rank: #6
Uses extra training data: Yes

Methods

Transformer · Self-Attention