Content-based attention is an attention mechanism based on cosine similarity:
$$f_{att}\left(\textbf{h}_{i}, \textbf{s}_{j}\right) = \cos\left(\textbf{h}_{i}, \textbf{s}_{j}\right) = \frac{\textbf{h}_{i}^{\top}\textbf{s}_{j}}{\lVert\textbf{h}_{i}\rVert\,\lVert\textbf{s}_{j}\rVert}$$
It was utilised in Neural Turing Machines as part of the Addressing Mechanism.
We then produce a normalized attention weighting by taking a softmax over these alignment scores.
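Concretely, the mechanism is just cosine-similarity scoring followed by a softmax. The sketch below is illustrative only; the function name, array shapes, and example values are assumptions, not from the paper:

```python
import numpy as np

def content_based_attention(H, s):
    """Content-based attention weighting via cosine similarity.

    H: (n, d) array of candidate vectors h_1..h_n (e.g. memory rows or
       encoder states); s: (d,) query/state vector s_j.
    Returns an (n,) softmax-normalized attention weighting.
    (Names and shapes are hypothetical, chosen for illustration.)
    """
    # Alignment scores: f_att(h_i, s) = cos(h_i, s) for each row h_i
    scores = H @ s / (np.linalg.norm(H, axis=1) * np.linalg.norm(s) + 1e-8)
    # Softmax over the alignment scores yields normalized weights
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

# Example: attend over 4 candidate vectors of dimension 3
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.5, 0.5]])
s = np.array([1.0, 1.0, 0.0])
print(content_based_attention(H, s))  # weights sum to 1; peak on the third row
```

Note that the Neural Turing Machine's content addressing additionally scales the cosine scores by a learned key strength $\beta_{t}$ before the softmax; the sketch above omits that detail.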
Source: Neural Turing Machines
| Task | Papers | Share |
|---|---|---|
| Question Answering | 5 | 13.16% |
| Machine Translation | 3 | 7.89% |
| Speech Recognition | 2 | 5.26% |
| Image Classification | 2 | 5.26% |
| BIG-bench Machine Learning | 2 | 5.26% |
| Information Retrieval | 2 | 5.26% |
| Scene Text Recognition | 1 | 2.63% |
| Action Detection | 1 | 2.63% |
| Activity Detection | 1 | 2.63% |