MATE is a Transformer architecture designed to model the structure of web tables. It uses sparse attention that lets each head efficiently attend along either the rows or the columns of a table: the head reorders the tokens by row or column index and then applies windowed attention over that ordering. Unlike traditional self-attention, which scales quadratically, MATE scales linearly in the sequence length.
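The row/column reordering plus windowed attention can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names (`mate_head`, `windowed_attention`), the window size, and the plain softmax are illustrative choices.

```python
import numpy as np

def windowed_attention(q, k, v, window):
    # Each query attends only to keys within a local window of the
    # (reordered) sequence, so cost is O(n * window) instead of O(n^2).
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

def mate_head(q, k, v, row_ids, col_ids, axis, window=2):
    """One MATE-style head: reorder tokens by row or column index,
    run windowed attention in that order, then undo the permutation."""
    primary, secondary = (row_ids, col_ids) if axis == "row" else (col_ids, row_ids)
    # np.lexsort sorts by the LAST key first, so `primary` goes last.
    order = np.lexsort((secondary, primary))
    inv = np.argsort(order)  # inverse permutation to restore token order
    out = windowed_attention(q[order], k[order], v[order], window)
    return out[inv]
```

With this layout, a "row" head sees all cells of the same table row as neighbors (and likewise for a "column" head), so a small window suffices to cover the structurally related tokens.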
Source: MATE: Multi-view Attention for Table Transformer Efficiency
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 2 | 8.70% |
| Mamba | 1 | 4.35% |
| Text-to-Video Generation | 1 | 4.35% |
| Video Generation | 1 | 4.35% |
| Cross-Modal Retrieval | 1 | 4.35% |
| Multimodal Sentiment Analysis | 1 | 4.35% |
| Sentiment Analysis | 1 | 4.35% |
| Term Extraction | 1 | 4.35% |
| Automatic Speech Recognition (ASR) | 1 | 4.35% |