Linformer is a linear Transformer that uses a linear self-attention mechanism to tackle the self-attention bottleneck in Transformer models. The original scaled dot-product attention is decomposed into multiple smaller attentions through linear projections: the keys and values are projected from sequence length n down to a fixed length k ≪ n, so the combination of these operations forms a low-rank factorization of the original attention and reduces its time and memory complexity from O(n²) to O(n).
Source: Linformer: Self-Attention with Linear Complexity
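To make the mechanism concrete, here is a minimal single-head sketch of Linformer-style attention in PyTorch. The class name `LinformerSelfAttention`, the projected length `k`, and the initialisation of the projection matrices `E` and `F` are illustrative assumptions rather than the authors' reference implementation; the sketch only shows how projecting keys and values along the sequence axis turns the n × n attention map into an n × k one.

```python
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    """Single-head Linformer-style attention (illustrative sketch).

    Keys and values of sequence length n are projected down to a fixed
    length k with learned matrices E and F, so the softmax attention map
    is (n x k) rather than (n x n) -- linear in sequence length.
    """

    def __init__(self, dim: int, seq_len: int, k: int = 256):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Low-rank projections along the sequence axis (assumed init scheme).
        self.E = nn.Parameter(torch.randn(k, seq_len) / k ** 0.5)
        self.F = nn.Parameter(torch.randn(k, seq_len) / k ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim)
        q, k_, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # Project keys and values from length n down to fixed length k.
        k_proj = torch.einsum('kn,bnd->bkd', self.E, k_)   # (batch, k, dim)
        v_proj = torch.einsum('kn,bnd->bkd', self.F, v)    # (batch, k, dim)
        # The attention map is (n x k), so cost grows linearly in n.
        attn = torch.softmax(q @ k_proj.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v_proj                               # (batch, n, dim)

x = torch.randn(2, 1024, 64)                 # batch of 2, n = 1024 tokens
out = LinformerSelfAttention(dim=64, seq_len=1024, k=128)(x)
print(out.shape)                             # torch.Size([2, 1024, 64])
```

Because E and F fix the projected length k regardless of the input length n, holding k constant gives O(n·k) = O(n) attention cost, in contrast to the O(n²) of standard scaled dot-product attention.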
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 2 | 14.29% |
| Retrieval | 1 | 7.14% |
| Abstractive Text Summarization | 1 | 7.14% |
| Machine Translation | 1 | 7.14% |
| Text Generation | 1 | 7.14% |
| Text Summarization | 1 | 7.14% |
| Classification | 1 | 7.14% |
| Image Classification | 1 | 7.14% |
| Image Generation | 1 | 7.14% |