MATE is a Transformer architecture designed to model the structure of web tables. Its sparse-attention scheme lets each head attend efficiently to either the rows or the columns of a table: a head reorders the tokens by row or column index and then applies windowed attention over the reordered sequence. Unlike standard self-attention, which is quadratic, MATE scales linearly in the sequence length.
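A minimal sketch of the idea in NumPy, assuming a toy 3x3 table. The function names, window size, and head dimensions are illustrative, not MATE's actual implementation: a "row head" sorts tokens by row index, a "column head" by column index, and each attends only within a fixed window of the sorted sequence, giving linear cost in the number of tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def windowed_attention(q, k, v, order, window):
    """Windowed attention after reordering tokens (illustrative sketch).

    q, k, v: (n, d) arrays; order: permutation sorting tokens by row or
    column index; window: one-sided window size in the sorted sequence.
    """
    n, d = q.shape
    inv = np.argsort(order)            # maps sorted positions back to original
    qs, ks, vs = q[order], k[order], v[order]
    out = np.zeros_like(qs)
    for i in range(n):                 # each token attends to a local window
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = qs[i] @ ks[lo:hi].T / np.sqrt(d)
        out[i] = softmax(scores) @ vs[lo:hi]
    return out[inv]                    # restore the original token order

# Toy 3x3 table: row/column index of each of the 9 cell tokens
rows = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
cols = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((9, 4)) for _ in range(3))

# A "row head" sorts by row index, a "column head" by column index
row_head = windowed_attention(q, k, v, np.argsort(rows, kind="stable"), window=2)
col_head = windowed_attention(q, k, v, np.argsort(cols, kind="stable"), window=2)
```

Because each token only attends to a fixed-size window, the cost is O(n · window · d) rather than the O(n² · d) of dense self-attention.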
Source: MATE: Multi-view Attention for Table Transformer Efficiency
| Task | Papers | Share |
|---|---|---|
| Automatic Speech Recognition (ASR) | 1 | 7.69% |
| Domain Generalization | 1 | 7.69% |
| Language Modelling | 1 | 7.69% |
| Speech Recognition | 1 | 7.69% |
| Camera Calibration | 1 | 7.69% |
| 3D Object Classification | 1 | 7.69% |
| Point Cloud Classification | 1 | 7.69% |
| Multi-agent Reinforcement Learning | 1 | 7.69% |
| Reinforcement Learning (RL) | 1 | 7.69% |