MATE

Introduced by Eisenschlos et al. in MATE: Multi-view Attention for Table Transformer Efficiency

MATE is a Transformer architecture designed to model the structure of web tables. It uses sparse attention in a way that lets each head efficiently attend along either the rows or the columns of a table: the head reorders the tokens by row or column index and then applies a windowed attention mechanism over the reordered sequence. Unlike traditional self-attention, which scales quadratically, MATE scales linearly in the sequence length.
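
The reorder-then-window scheme can be illustrated in a few lines. Below is a minimal sketch of one MATE-style head in NumPy, assuming unbatched single-example inputs; the names (`mate_head`, `window`, `row_ids`, `col_ids`) are illustrative and not taken from the paper's released code.

```python
# Minimal sketch of one MATE-style sparse attention head (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mate_head(q, k, v, row_ids, col_ids, axis="row", window=4):
    """One sparse head: reorder tokens by row or column index,
    then attend within a local window of the reordered sequence.

    q, k, v: (seq_len, d) projections for this head
    row_ids, col_ids: (seq_len,) integer table coordinates per token
    axis: "row" heads sort by row index, "column" heads by column index
    window: each token attends to `window` neighbours on each side,
            so cost is O(seq_len * window) rather than O(seq_len^2)
    """
    ids = row_ids if axis == "row" else col_ids
    order = np.argsort(ids, kind="stable")  # group tokens of the same row/column
    inv = np.argsort(order)                 # permutation that restores the original order

    qs, ks, vs = q[order], k[order], v[order]
    n, d = qs.shape
    out = np.zeros_like(vs)
    for i in range(n):                      # windowed attention over the reordered sequence
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = qs[i] @ ks[lo:hi].T / np.sqrt(d)
        out[i] = softmax(scores) @ vs[lo:hi]
    return out[inv]                         # undo the reordering
```

In a full multi-head layer, some heads would run with `axis="row"` and the others with `axis="column"`, giving each token an efficient view along both table dimensions.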

Source: MATE: Multi-view Attention for Table Transformer Efficiency
