2 papers with code • 1 benchmark • 2 datasets
Verifying facts given semi-structured data.
To make long examples usable as input to BERT models, we evaluate table pruning techniques as a pre-processing step that drastically improves training and prediction efficiency at a moderate cost in accuracy.
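The pruning idea can be sketched as a simple heuristic: keep only the columns whose header or cell values overlap with the statement's tokens, so the linearized table fits within a BERT input window. This is a minimal illustration, not the paper's actual method; `prune_table`, its scoring rule, and the example table are all hypothetical.

```python
# Hypothetical sketch of table pruning as a pre-processing step.
# Keeps the columns with the highest token overlap against the statement.
def prune_table(header, rows, statement, max_cols=4):
    stmt_tokens = set(statement.lower().split())

    def score(col):
        # Collect the header plus all cell strings for this column.
        cells = {str(r[col]).lower() for r in rows}
        cells.add(header[col].lower())
        # Count statement tokens that appear in any cell of the column.
        return sum(1 for tok in stmt_tokens
                   if any(tok in c for c in cells))

    # Rank columns by overlap, keep the top max_cols in original order.
    ranked = sorted(range(len(header)), key=score, reverse=True)
    keep = sorted(ranked[:max_cols])
    return ([header[i] for i in keep],
            [[r[i] for i in keep] for r in rows])

# Toy example (made-up data):
header = ["player", "team", "points", "year"]
rows = [["john doe", "tigers", "30", "2004"],
        ["jane roe", "lions", "12", "2004"]]
h, r = prune_table(header, rows, "john doe scored 30 points", max_cols=2)
# h → ["player", "points"]; the "team" and "year" columns are dropped.
```

A real system would also prune rows and use a trained relevance model rather than token overlap, but the efficiency/accuracy trade-off mentioned above is the same: shorter inputs train and predict faster while risking the loss of relevant evidence.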
Ranked #1 on Table-based Fact Verification on TabFact
To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.
Ranked #2 on Table-based Fact Verification on TabFact