Relative Position Encodings are a type of position embedding for Transformer-based models that attempts to exploit pairwise, relative positional information. Relative positional information is supplied to the model at two points: the keys and the values. This becomes apparent in the two modified self-attention equations shown below. First, relative positional information is supplied to the model as an additional component of the keys:
$$ e_{ij} = \frac{x_{i}W^{Q}\left(x_{j}W^{K} + a^{K}_{ij}\right)^{T}}{\sqrt{d_{z}}} $$
Here $a^{K}_{ij}$ is an edge representation for the input pair $(x_{i}, x_{j})$. The softmax operation over $e_{ij}$ remains unchanged from vanilla self-attention and produces the attention weights $\alpha_{ij}$. Relative positional information is then supplied again as a sub-component of the values matrix:
$$ z_{i} = \sum^{n}_{j=1}\alpha_{ij}\left(x_{j}W^{V} + a_{ij}^{V}\right)$$
In other words, instead of adding absolute positional embeddings to the token embeddings before attention, relative positional information is added to the keys and values on the fly during the attention computation.
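To make the two equations concrete, below is a minimal single-head sketch in PyTorch. It assumes the edge representations $a^{K}_{ij}$ and $a^{V}_{ij}$ are learned embeddings of the clipped relative distance $j - i$, as in the source paper; the tensor names, the clipping window `max_dist`, and the helper `edge_representations` are illustrative, not part of the original text.

```python
import torch
import torch.nn.functional as F


def edge_representations(n, d_z, max_dist=4):
    """Build edge representations a_ij from a learned table of
    2 * max_dist + 1 embeddings indexed by the clipped distance j - i."""
    table = torch.nn.Embedding(2 * max_dist + 1, d_z)
    rel = torch.arange(n)[None, :] - torch.arange(n)[:, None]  # rel[i, j] = j - i
    rel = rel.clamp(-max_dist, max_dist) + max_dist            # shift into [0, 2*max_dist]
    return table(rel)                                          # (n, n, d_z)


def relative_attention(x, W_q, W_k, W_v, a_k, a_v):
    """Single-head self-attention with relative position representations.

    x:              (n, d_x) input embeddings
    W_q, W_k, W_v:  (d_x, d_z) projection matrices
    a_k, a_v:       (n, n, d_z) edge representations added to keys / values
    """
    d_z = W_q.shape[1]
    q, k, v = x @ W_q, x @ W_k, x @ W_v                 # each (n, d_z)

    # e_ij = x_i W^Q (x_j W^K + a^K_ij)^T / sqrt(d_z)
    content = q @ k.T                                   # x_i W^Q (x_j W^K)^T
    position = torch.einsum("id,ijd->ij", q, a_k)       # x_i W^Q (a^K_ij)^T
    e = (content + position) / d_z ** 0.5

    alpha = F.softmax(e, dim=-1)                        # softmax itself is unchanged

    # z_i = sum_j alpha_ij (x_j W^V + a^V_ij)
    z = alpha @ v + torch.einsum("ij,ijd->id", alpha, a_v)
    return z


# Usage with arbitrary sizes chosen for illustration
n, d_x, d_z = 8, 16, 16
x = torch.randn(n, d_x)
W_q, W_k, W_v = (torch.randn(d_x, d_z) for _ in range(3))
a_k = edge_representations(n, d_z)
a_v = edge_representations(n, d_z)
z = relative_attention(x, W_q, W_k, W_v, a_k, a_v)      # (n, d_z)
```

Note that only the key and value terms pick up the per-pair offsets; the softmax over the scores is the standard one.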
Source: Jake Tae
Image Source: [Relative Positional Encoding for Transformers with Linear Complexity](https://www.youtube.com/watch?v=qajudaEHuq8)
Source: Self-Attention with Relative Position Representations
Task | Papers | Share |
---|---|---|
Language Modelling | 3 | 6.52% |
Management | 2 | 4.35% |
Image Generation | 2 | 4.35% |
Question Answering | 2 | 4.35% |
Reinforcement Learning (RL) | 2 | 4.35% |
Multi-Armed Bandits | 2 | 4.35% |
Thompson Sampling | 2 | 4.35% |
Machine Translation | 2 | 4.35% |
Translation | 2 | 4.35% |