Fastformer

Introduced by Wu et al. in Fastformer: Additive Attention Can Be All You Need

Fastformer is a type of Transformer that uses additive attention as its building block. Instead of modeling the pair-wise interactions between tokens, it uses additive attention to model global contexts, and each token representation is then further transformed based on its interaction with the global context representations.

Source: Fastformer: Additive Attention Can Be All You Need
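
Below is a minimal single-head sketch of this additive attention mechanism in PyTorch. It is an illustrative reimplementation, not the authors' released code; the module and parameter names (AdditiveAttention, w_q, w_k) are assumptions. Learned attention weights pool the query vectors into a single global query, which is mixed into each key by element-wise product; the resulting vectors are pooled again into a global key that conditions each value, keeping the cost linear in sequence length rather than quadratic.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    # Single-head sketch of Fastformer-style additive attention
    # (hypothetical minimal version, not the reference implementation).
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)   # scores the queries for global pooling
        self.w_k = nn.Linear(dim, 1)   # scores the context-aware keys
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):              # x: (batch, seq_len, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        d = q.size(-1)

        # Global query: additive-attention pooling over all query vectors.
        alpha = F.softmax(self.w_q(q) / d ** 0.5, dim=1)   # (b, n, 1)
        q_global = (alpha * q).sum(dim=1, keepdim=True)    # (b, 1, d)

        # Mix the global context into each key via element-wise product.
        p = q_global * k                                   # (b, n, d)

        # Global key: pool the context-aware keys the same way.
        beta = F.softmax(self.w_k(p) / d ** 0.5, dim=1)
        k_global = (beta * p).sum(dim=1, keepdim=True)     # (b, 1, d)

        # Condition each value on the global key, transform, and add
        # the query back as a residual connection, as in the paper.
        u = k_global * v
        return self.to_out(u) + q

x = torch.randn(2, 16, 64)
attn = AdditiveAttention(64)
print(attn(x).shape)   # torch.Size([2, 16, 64])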

Tasks

Task                       Papers   Share
Mamba                      1        25.00%
Self-Supervised Learning   1        25.00%
Text Classification        1        25.00%
Text Summarization         1        25.00%


Categories

Transformers