Attention Modules

LeViT Attention Block

Introduced by Graham et al. in LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference

LeViT Attention Block is the attention module used in the LeViT architecture. Its distinguishing feature is that it provides positional information within each attention block: relative position information is injected explicitly into the attention mechanism by adding a learned attention bias to the attention maps.

Source: LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
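
To illustrate the idea, below is a minimal PyTorch sketch of an attention block with a learned, per-head bias over relative positions, added to the attention map before the softmax. The class name, shapes, and hyperparameters here are illustrative assumptions, not the reference LeViT implementation.

```python
import torch
import torch.nn as nn

class AttentionWithBias(nn.Module):
    """Minimal sketch of attention with a learned, per-head bias over
    relative positions, in the spirit of the LeViT attention block.
    Names and shapes are illustrative, not the reference code."""

    def __init__(self, dim, num_heads, resolution):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

        # Enumerate the resolution x resolution grid positions and map every
        # (query, key) pair to the index of its relative offset; pairs with
        # the same offset share one learned bias value per head.
        points = [(i, j) for i in range(resolution) for j in range(resolution)]
        offsets, idxs = {}, []
        for p1 in points:
            for p2 in points:
                offset = (abs(p1[0] - p2[0]), abs(p1[1] - p2[1]))
                if offset not in offsets:
                    offsets[offset] = len(offsets)
                idxs.append(offsets[offset])
        N = len(points)
        self.attention_biases = nn.Parameter(torch.zeros(num_heads, len(offsets)))
        self.register_buffer(
            "attention_bias_idxs", torch.LongTensor(idxs).view(N, N)
        )

    def forward(self, x):  # x: (batch, N, dim), with N = resolution**2 tokens
        B, N, C = x.shape
        qkv = self.qkv(x).view(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]              # each (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)
        # The key idea: add the learned positional bias to the attention
        # map before the softmax.
        attn = attn + self.attention_biases[:, self.attention_bias_idxs]
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Usage: a 7x7 token grid, i.e. 49 tokens of width 64.
blk = AttentionWithBias(dim=64, num_heads=4, resolution=7)
y = blk(torch.randn(2, 49, 64))  # -> (2, 49, 64)
```

Because the bias is indexed by relative offset, all query-key pairs at the same spatial displacement share one learned value per head, so the parameter count grows with the number of distinct offsets rather than with the square of the sequence length.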

Tasks


Task                     Papers   Share
Anomaly Detection        1        33.33%
General Classification   1        33.33%
Image Classification     1        33.33%
