Scaled dot-product attention is an attention mechanism in which the dot products are scaled down by $\sqrt{d_k}$, where $d_k$ is the dimensionality of the keys. Formally, given a query matrix $Q$, a key matrix $K$ and a value matrix $V$, the attention is computed as:
$$ {\text{Attention}}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V $$
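To make the formula concrete, here is a minimal NumPy sketch of scaled dot-product attention. The function name, the shapes, and the dimensions ($d_k = 64$, $d_v = 32$) are illustrative assumptions for this example, not part of the original paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)
    Returns: (n_queries, d_v)
    """
    d_k = Q.shape[-1]
    # Attention logits, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (subtract the row max for numerical stability)
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the values
    return weights @ V

# Example usage with arbitrary sizes
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 64))   # 4 queries, d_k = 64
K = rng.standard_normal((10, 64))  # 10 keys
V = rng.standard_normal((10, 32))  # 10 values, d_v = 32
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 32)
```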
If we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean $0$ and variance $d_k$. Large-magnitude logits push the softmax into regions with extremely small gradients, so we divide by $\sqrt{d_k}$ to bring the variance back to $1$.
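Spelled out, under the independence and unit-variance assumptions above, the variance of the dot product is

$$ \operatorname{Var}(q \cdot k) = \operatorname{Var}\!\left(\sum_{i=1}^{d_k} q_i k_i\right) = \sum_{i=1}^{d_k} \operatorname{Var}(q_i k_i) = \sum_{i=1}^{d_k} \operatorname{E}[q_i^2]\,\operatorname{E}[k_i^2] = d_k, $$

so dividing the logits by $\sqrt{d_k}$ restores unit variance: $\operatorname{Var}\!\left(\frac{q \cdot k}{\sqrt{d_k}}\right) = \frac{d_k}{d_k} = 1$.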
Source: *Attention Is All You Need* (Vaswani et al., 2017)
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 71 | 8.92% |
| Large Language Model | 39 | 4.90% |
| Semantic Segmentation | 29 | 3.64% |
| Retrieval | 17 | 2.14% |
| Prompt Engineering | 14 | 1.76% |
| Object Detection | 14 | 1.76% |
| Benchmarking | 13 | 1.63% |
| Autonomous Driving | 12 | 1.51% |
| Question Answering | 12 | 1.51% |