TabFact: A Large-scale Dataset for Table-based Fact Verification

The problem of verifying whether a textual hypothesis holds based on given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies mainly deal with unstructured evidence (e.g., natural language sentences, documents, and news articles), while verification against structured evidence, such as tables, graphs, and databases, remains under-explored. This paper specifically aims to study fact verification given semi-structured data as evidence. To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, each labeled as either ENTAILED or REFUTED. TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning. To address these reasoning challenges, we design two different models: Table-BERT and the Latent Program Algorithm (LPA). Table-BERT leverages a state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification. LPA parses statements into programs and executes them against the tables to obtain a binary value for verification. Both methods achieve similar accuracy but still lag far behind human performance. We also perform a comprehensive analysis to demonstrate opportunities for future work. The data and code of the dataset are provided in \url{}.
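To make the two modeling strategies concrete, here is a minimal sketch (not the authors' code) of the two ideas the abstract describes: Table-BERT-style horizontal linearization of a table with a template, and LPA-style execution of a symbolic program against the table. The toy table, the template wording, and the example program are all assumptions for illustration.

```python
# Illustrative sketch of the two verification strategies described above.
# The table contents, template phrasing, and program are hypothetical.

table = {
    "header": ["player", "team", "points"],
    "rows": [["alice", "red", "30"], ["bob", "blue", "25"]],
}

def linearize_horizontal(table):
    """Flatten a table row by row into a natural-language-like string
    (a 'horizontal' linearization with a simple template)."""
    parts = []
    for row in table["rows"]:
        cells = [f"the {col} is {val}" for col, val in zip(table["header"], row)]
        parts.append("; ".join(cells))
    return " . ".join(parts)

# In Table-BERT, this string is concatenated with the statement and fed
# to a pre-trained encoder for binary (ENTAILED/REFUTED) classification.
statement = "alice scored 30 points"
bert_input = linearize_horizontal(table) + " [SEP] " + statement

# LPA instead parses the statement into an executable program; a toy
# equivalent of eq(hop(filter_eq(rows, player, alice), points), 30):
def execute_program(table):
    matched = [r for r in table["rows"] if r[0] == "alice"]  # filter_eq
    return matched[0][2] == "30"                             # hop + eq

print(execute_program(table))  # True -> ENTAILED
```

The contrast this sketch is meant to show: the linearized string hands all reasoning to the neural encoder (soft linguistic reasoning), while the program makes the table operations explicit and exactly executable (hard symbolic reasoning).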

ICLR 2020


Introduced in the Paper: TabFact

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Table-based Fact Verification | TabFact | Table-BERT-Horizontal-T+F-Template | Test | 65.12 | #5 |
| Table-based Fact Verification | TabFact | Table-BERT-Horizontal-T+F-Template | Val | 66.1 | #5 |
| Table-based Fact Verification | TabFact | BERT classifier w/o Table | Test | 50.5 | #6 |
| Table-based Fact Verification | TabFact | BERT classifier w/o Table | Val | 50.9 | #6 |

