XCiT

Introduced by El-Nouby et al. in XCiT: Cross-Covariance Image Transformers

Cross-Covariance Image Transformers, or XCiT, is a type of vision transformer that aims to combine the accuracy of conventional transformers with the scalability of convolutional architectures.

The self-attention operation underlying transformers yields global interactions between all tokens, i.e. words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. The authors propose a "transposed" version of self-attention, called cross-covariance attention (XCA), that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting operation has linear complexity in the number of tokens, allowing efficient processing of high-resolution images.
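A minimal PyTorch sketch of this cross-covariance attention operation is given below. The multi-head layout, the learnable per-head temperature tau, and the linear qkv/output projections are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class XCA(nn.Module):
    """Sketch of cross-covariance attention: attention over feature
    channels instead of tokens, via the d x d cross-covariance matrix
    of keys and queries (hypothetical layout, not the reference code)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        # Learnable temperature tau, one scalar per head (assumption).
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, D = x.shape  # batch, tokens, embedding dim
        # Project to q/k/v and split into heads, placing channels before
        # tokens: each of q, k, v has shape (B, heads, d_head, N).
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, D // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1).unbind(0)
        # L2-normalise each channel along the token dimension so the
        # attention logits stay bounded.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        # Cross-covariance matrix between keys and queries: shape
        # (d_head, d_head), independent of the number of tokens N.
        attn = (q @ k.transpose(-2, -1)) / self.temperature
        attn = attn.softmax(dim=-1)
        # Mix value channels with the attention map, restore (B, N, D).
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, D)
        return self.proj(out)
```

Because the attention map is d_head x d_head rather than N x N, the cost of the attention product grows linearly with the number of tokens, which is what makes the operation tractable for high-resolution inputs.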


Components


Component     Type
XCiT Layer    Image Model Blocks
