
Locality Sensitive Hashing Attention

Introduced by Kitaev et al. in Reformer: The Efficient Transformer

LSH Attention, or Locality Sensitive Hashing Attention, replaces dot-product attention with an attention mechanism based on locality-sensitive hashing, reducing its complexity from $O(L^2)$ to $O(L \log L)$, where $L$ is the length of the sequence. LSH refers to a family of hash functions (known as LSH families) that hash data points into buckets so that points near each other land in the same bucket with high probability, while points far apart are likely to fall into different buckets. LSH Attention was proposed as part of the Reformer architecture.
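
The sketch below illustrates the idea in plain NumPy under several simplifying assumptions: a single head, a single hash round, shared query/key vectors, and no causal or self-attention masking (all of which the Reformer paper handles). The function names lsh_buckets and lsh_attention and the bucket/chunk parameters are illustrative, not part of any published API: positions are hashed with a random rotation, sorted by bucket, split into fixed-size chunks, and each chunk attends only to itself and the previous chunk.

```python
import numpy as np


def lsh_buckets(x, n_buckets, seed=0):
    """Angular LSH via a random rotation: nearby vectors get the same
    bucket id with high probability, distant vectors usually do not."""
    rng = np.random.default_rng(seed)
    # Random projection to n_buckets // 2 dimensions; the bucket is the
    # argmax over the concatenation [xR, -xR], giving n_buckets buckets.
    R = rng.standard_normal((x.shape[-1], n_buckets // 2))
    rotated = x @ R
    return np.argmax(np.concatenate([rotated, -rotated], axis=-1), axis=-1)


def lsh_attention(qk, v, n_buckets=8, chunk_len=4):
    """Bucket positions with LSH, sort so same-bucket positions are
    adjacent, then attend within fixed-size chunks (each chunk also
    looks back one chunk); this is what brings the cost below O(L^2)."""
    seq_len, d = qk.shape
    order = np.argsort(lsh_buckets(qk, n_buckets), kind="stable")
    sqk, sv = qk[order], v[order]

    out = np.zeros_like(v)
    for start in range(0, seq_len, chunk_len):
        ctx = max(0, start - chunk_len)          # include previous chunk
        scores = sqk[start:start + chunk_len] @ sqk[ctx:start + chunk_len].T
        scores /= np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[start:start + chunk_len] = weights @ sv[ctx:start + chunk_len]

    return out[np.argsort(order)]                # undo the sort


# Toy usage with shared query/key vectors, as in Reformer.
rng = np.random.default_rng(1)
qk = rng.standard_normal((16, 32))
v = rng.standard_normal((16, 32))
print(lsh_attention(qk, v).shape)                # (16, 32)
```

The full Reformer additionally uses multiple hash rounds (averaging their results to reduce the chance that similar vectors are split across buckets), ties queries to keys, and masks positions from attending to themselves unless no other targets are available; those details are omitted here for brevity.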

Source: Reformer: The Efficient Transformer


Tasks


Task                          Papers   Share
Language Modelling                 3   7.14%
Time Series Analysis               2   4.76%
Time Series Forecasting            2   4.76%
Sentence                           2   4.76%
Image Generation                   2   4.76%
Survey                             2   4.76%
Deep Learning                      2   4.76%
Reinforcement Learning (RL)        2   4.76%
Deblurring                         1   2.38%


Categories


Attention Mechanisms