TabNet: Attentive Interpretable Tabular Learning

20 Aug 2019  ·  Sercan O. Arik, Tomas Pfister ·

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.
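The core mechanism in the abstract — a learned, sparse mask that selects features at each sequential decision step, with a prior that discourages reusing features across steps — can be illustrated with a minimal NumPy sketch. The function names (`sparsemax`, `decision_step_mask`) and the toy logits are illustrative assumptions; in the actual model the mask logits are produced by a learned attentive transformer, and `sparsemax` follows Martins & Astudillo (2016), which TabNet uses for sparse feature selection.

```python
import numpy as np

def sparsemax(z):
    """Sparse alternative to softmax (Martins & Astudillo, 2016):
    Euclidean projection of the logits onto the probability simplex,
    which zeroes out low-scoring entries entirely."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # sort logits descending
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cssv      # entries kept in the support
    k_z = k[support][-1]
    tau = (cssv[support][-1] - 1.0) / k_z  # simplex-projection threshold
    return np.maximum(z - tau, 0.0)

def decision_step_mask(logits, prior, gamma=1.5):
    """One attentive decision step (sketch): scale logits by the prior,
    take a sparse mask, then relax the prior so features used at this
    step are discouraged (gamma > 1 means softly, not forbidden) at
    later steps. `gamma` is the relaxation parameter from the paper."""
    mask = sparsemax(prior * logits)
    new_prior = prior * (gamma - mask)
    return mask, new_prior

# Toy example: three features, uniform prior at the first step.
mask, prior = decision_step_mask(np.array([2.0, 1.0, 0.1]),
                                 prior=np.ones(3))
```

Because the mask is sparse (exact zeros for unselected features), summing the masks over decision steps gives the per-feature attributions the paper reports for interpretability.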


Results

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Poker Hand Classification | Poker Hand | TabNet | Test Accuracy | 99.2 | # 1 |
| Poker Hand Classification | Poker Hand | XGBoost | Test Accuracy | 71.1 | # 2 |
| Poker Hand Classification | Poker Hand | LightGBM | Test Accuracy | 70.0 | # 3 |
| Poker Hand Classification | Poker Hand | CatBoost | Test Accuracy | 66.6 | # 4 |
| Poker Hand Classification | Poker Hand | Deep neural decision tree | Test Accuracy | 65.1 | # 5 |
| Poker Hand Classification | Poker Hand | Decision tree | Test Accuracy | 50.0 | # 6 |
