Deep Tabular Learning

FT-Transformer

Introduced by Gorishniy et al. in Revisiting Deep Learning Models for Tabular Data

FT-Transformer (Feature Tokenizer + Transformer) is a simple adaptation of the Transformer architecture to the tabular domain. The Feature Tokenizer component transforms all features (categorical and numerical) into embeddings, producing a token matrix $T$, and the model then runs a stack of Transformer layers over these tokens, so every Transformer layer operates on the feature level of a single object. (In this respect, the model is similar to AutoInt.) In the Transformer component, the [CLS] token is appended to $T$, and then $L$ Transformer layers are applied. PreNorm (normalization applied before the attention and feed-forward blocks) is used for easier optimization and good performance. The final representation of the [CLS] token is used for prediction.

Source: Revisiting Deep Learning Models for Tabular Data
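To make the description concrete, below is a minimal PyTorch sketch of the architecture. The module and hyperparameter names (FeatureTokenizer, d_token, n_layers, and so on) are illustrative assumptions, not the authors' reference implementation; PreNorm is obtained here via `norm_first=True` in PyTorch's built-in encoder layer.

```python
# Minimal sketch of the FT-Transformer idea described above.
# Hyperparameter values and module names are illustrative assumptions,
# not the paper's reference implementation.
import torch
import torch.nn as nn


class FeatureTokenizer(nn.Module):
    """Maps each numerical and categorical feature to a d_token embedding."""

    def __init__(self, n_num_features, cat_cardinalities, d_token):
        super().__init__()
        # One learned weight/bias pair per numerical feature.
        self.num_weight = nn.Parameter(torch.randn(n_num_features, d_token))
        self.num_bias = nn.Parameter(torch.randn(n_num_features, d_token))
        # One embedding table per categorical feature.
        self.cat_embeddings = nn.ModuleList(
            nn.Embedding(c, d_token) for c in cat_cardinalities
        )

    def forward(self, x_num, x_cat):
        # x_num: (batch, n_num_features); x_cat: (batch, n_cat_features)
        num_tokens = x_num[..., None] * self.num_weight + self.num_bias
        cat_tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.cat_embeddings)],
            dim=1,
        )
        # Token matrix T: (batch, n_features, d_token)
        return torch.cat([num_tokens, cat_tokens], dim=1)


class FTTransformer(nn.Module):
    def __init__(self, n_num_features, cat_cardinalities, d_token=64,
                 n_layers=3, n_heads=8, d_out=1):
        super().__init__()
        self.tokenizer = FeatureTokenizer(n_num_features, cat_cardinalities, d_token)
        self.cls_token = nn.Parameter(torch.randn(1, 1, d_token))
        # norm_first=True gives the PreNorm variant used by FT-Transformer.
        layer = nn.TransformerEncoderLayer(
            d_model=d_token, nhead=n_heads, dim_feedforward=4 * d_token,
            batch_first=True, norm_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(
            nn.LayerNorm(d_token), nn.ReLU(), nn.Linear(d_token, d_out)
        )

    def forward(self, x_num, x_cat):
        tokens = self.tokenizer(x_num, x_cat)                 # T
        cls = self.cls_token.expand(tokens.shape[0], -1, -1)  # [CLS] token
        tokens = torch.cat([cls, tokens], dim=1)              # add [CLS] to T
        tokens = self.encoder(tokens)                         # L Transformer layers
        return self.head(tokens[:, 0])                        # predict from [CLS]


# Usage: 3 numerical features, 2 categorical features with 10 and 4 categories.
model = FTTransformer(n_num_features=3, cat_cardinalities=[10, 4])
x_num = torch.randn(8, 3)
x_cat = torch.stack(
    [torch.randint(0, 10, (8,)), torch.randint(0, 4, (8,))], dim=1
)
print(model(x_num, x_cat).shape)  # torch.Size([8, 1])
```

Note that this sketch adds no positional encodings: each feature already has its own learned tokenizer parameters, so the tokens are feature-specific and their order carries no extra information.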

Tasks

Task                       Papers   Share
Self-Supervised Learning   2        66.67%
Federated Learning         1        33.33%

