"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection

ACL 2017  ·  William Yang Wang

Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news have been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected 12.8K manually labeled short statements in various contexts over a decade from PolitiFact.com, which provides a detailed analysis report and links to source documents for each case. This dataset can also be used for fact-checking research. Notably, this new dataset is an order of magnitude larger than the previous largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. We show that this hybrid approach can improve a text-only deep learning model.
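The core idea of the hybrid model is to extract text features with a convolutional network and concatenate them with an encoded meta-data vector before the final classifier. The following is a minimal NumPy sketch of that idea only, not the paper's implementation: all dimensions, the random weights, and the meta-data encoding are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
VOCAB, EMB, SEQ, FILTERS, WIDTH, META, CLASSES = 100, 16, 20, 8, 3, 5, 6

def text_cnn_features(token_ids, embeddings, conv_w):
    """1-D convolution over word embeddings, max-pooled over time."""
    x = embeddings[token_ids]                               # (SEQ, EMB)
    windows = np.stack([x[i:i + WIDTH].ravel()
                        for i in range(SEQ - WIDTH + 1)])   # (positions, WIDTH*EMB)
    feats = np.maximum(windows @ conv_w, 0.0)               # ReLU, (positions, FILTERS)
    return feats.max(axis=0)                                # max-over-time pooling

embeddings = rng.normal(size=(VOCAB, EMB))
conv_w = rng.normal(size=(WIDTH * EMB, FILTERS))
out_w = rng.normal(size=(FILTERS + META, CLASSES))

token_ids = rng.integers(0, VOCAB, size=SEQ)  # one statement as word ids
meta = rng.normal(size=META)                  # e.g. encoded speaker / party features

# Hybrid step: concatenate text features with meta-data before classification
hybrid = np.concatenate([text_cnn_features(token_ids, embeddings, conv_w), meta])
logits = hybrid @ out_w
pred = int(np.argmax(logits))  # one of the six truthfulness classes
```

In the paper the meta-data is itself encoded with learned embeddings and a convolutional layer rather than handed in as a raw vector; the concatenate-then-classify step above is the part this sketch illustrates.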


Datasets


Introduced in the Paper:

LIAR
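The released LIAR dataset is distributed as tab-separated files whose rows pair one of PolitiFact's six truthfulness ratings (pants-fire, false, barely-true, half-true, mostly-true, true) with a short statement and speaker meta-data. A minimal loader sketch follows; the exact column order shown (id, label, statement, then meta-data) is an assumption about the release format, and the sample rows are fabricated for illustration.

```python
import csv
import io

# Two fabricated rows in an assumed LIAR TSV layout: column 2 holds the
# six-way truthfulness label, column 3 the statement, later columns meta-data.
SAMPLE = (
    "1.json\thalf-true\tExample statement one.\teconomy\tjohn-doe\n"
    "2.json\tpants-fire\tExample statement two.\thealth\tjane-roe\n"
)

def load_liar(fileobj):
    """Yield (label, statement) pairs from a LIAR-style TSV file."""
    for row in csv.reader(fileobj, delimiter="\t"):
        yield row[1], row[2]

pairs = list(load_liar(io.StringIO(SAMPLE)))
```

For the real files, pass an open file handle (e.g. `open("train.tsv")`) instead of the in-memory sample.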

Results from the Paper


Task: Fake News Detection · Dataset: LIAR

Model                          Validation Accuracy   Test Accuracy
Hybrid CNNs (Text + All)       0.247 (#3)            0.274 (#1)
Hybrid CNNs (Text + Speaker)   0.277 (#1)            0.248 (#3)
CNNs                           0.260 (#2)            0.270 (#2)
Bi-LSTMs                       0.223 (#4)            0.233 (#4)
