Longformer is a modified Transformer architecture. Traditional Transformer-based models are unable to process long sequences because their self-attention operation scales quadratically with the sequence length. To address this, Longformer uses an attention pattern that scales linearly with sequence length, making it practical to process documents of thousands of tokens or longer. The attention mechanism is a drop-in replacement for standard self-attention and combines local windowed attention with task-motivated global attention.
The attention patterns used include sliding window attention, dilated sliding window attention, and global + sliding window attention; a minimal sketch of these patterns follows below.
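To make the linear scaling concrete, here is a small, self-contained sketch (not the authors' implementation) that builds a boolean attention mask combining a (dilated) sliding window with symmetric global attention. The function name, the window size of 512, and the choice of the first token as the global position are illustrative assumptions; the point is that each query attends to O(window + number of global tokens) keys rather than O(sequence length).

```python
import numpy as np

def longformer_attention_mask(seq_len, window, dilation=1, global_positions=()):
    """Boolean mask where mask[i, j] is True if query i may attend to key j.

    Illustrative sketch of the Longformer patterns: a (dilated) sliding
    window of local attention plus symmetric global attention at a few
    positions. Each token attends to O(window + #global) keys, so total
    work grows linearly with seq_len instead of quadratically.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    half = window // 2
    for i in range(seq_len):
        # (Dilated) sliding window: attend to neighbours within `half`
        # steps on each side, taking every `dilation`-th position.
        for offset in range(-half, half + 1, dilation):
            j = i + offset
            if 0 <= j < seq_len:
                mask[i, j] = True
    for g in global_positions:
        # Global attention is symmetric: the global token attends to all
        # positions, and every position attends to the global token.
        mask[g, :] = True
        mask[:, g] = True
    return mask

# Example: 4096 tokens, a 512-token window, global attention on token 0.
m = longformer_attention_mask(4096, window=512, global_positions=(0,))
print(m.sum() / m.size)  # fraction of attended pairs; far below 1.0
```

With dilation greater than 1, the window skips positions and therefore covers a longer range for the same per-token cost, which is the idea behind the dilated sliding window variant.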
Source: Longformer: The Long-Document Transformer
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 13 | 7.88% |
| Sentence | 11 | 6.67% |
| Decoder | 10 | 6.06% |
| Document Classification | 10 | 6.06% |
| Question Answering | 9 | 5.45% |
| Abstractive Text Summarization | 6 | 3.64% |
| Text Classification | 6 | 3.64% |
| Classification | 6 | 3.64% |
| Natural Language Inference | 5 | 3.03% |