Position-aware Self-attention with Relative Positional Encodings for Slot Filling

9 Jul 2018 · Ivan Bilan, Benjamin Roth

This paper describes how to apply self-attention with relative positional encodings to the task of relation extraction. We propose to use the self-attention encoder layer together with an additional position-aware attention layer that takes into account the positions of the query and the object in the sentence. The self-attention encoder also uses a custom implementation of relative positional encodings, which allows each word in the sentence to take its left and right context into account. The model is evaluated on the TACRED dataset. The proposed model relies on attention alone (no recurrent or convolutional layers are used), while improving on the previous state of the art.
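
To illustrate the kind of mechanism the abstract refers to, the sketch below shows single-head self-attention whose scores incorporate learned embeddings of clipped relative offsets, in the spirit of Shaw et al. (2018). This is a minimal, assumed implementation: the class name `RelativeSelfAttention`, the `max_rel_dist` parameter, and all shapes are illustrative choices, not the authors' code, and the additional position-aware attention layer over query/object positions is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSelfAttention(nn.Module):
    """Single-head self-attention with learned relative positional encodings,
    clipped to a maximum relative distance (a sketch, not the paper's code)."""

    def __init__(self, d_model: int, max_rel_dist: int = 10):
        super().__init__()
        self.d_model = d_model
        self.max_rel_dist = max_rel_dist
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One embedding per clipped relative offset in [-max_rel_dist, max_rel_dist].
        self.rel_emb = nn.Embedding(2 * max_rel_dist + 1, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, T, D = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Relative offsets j - i for every query/key pair, clipped to the max distance,
        # so each word attends to its left and right context by relative position.
        pos = torch.arange(T, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel_dist, self.max_rel_dist)
        rel_k = self.rel_emb(rel + self.max_rel_dist)            # (T, T, D)

        # Content-content scores plus content-position scores from the relative embeddings.
        scores = q @ k.transpose(-2, -1)                         # (B, T, T)
        scores = scores + torch.einsum("btd,tsd->bts", q, rel_k)
        attn = F.softmax(scores / D ** 0.5, dim=-1)
        return attn @ v                                          # (B, T, D)


# Example usage on a toy batch:
# layer = RelativeSelfAttention(d_model=64)
# out = layer(torch.randn(2, 12, 64))   # -> (2, 12, 64)
```

In the full model described by the abstract, an encoder built from such layers would be followed by a position-aware attention layer that pools the sentence representation using the relative positions of the query and object entities before relation classification.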
