Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News

2 Nov 2018  ·  Luís Borges, Bruno Martins, Pável Calado ·

Fake news is nowadays an issue of pressing concern, given its recent rise as a potential threat to high-quality journalism and well-informed public discourse. The Fake News Challenge (FNC-1) was organized in 2017 to encourage the development of machine learning-based classification systems for stance detection (i.e., for identifying whether a particular news article agrees with, disagrees with, discusses, or is unrelated to a particular news headline), thus helping in the detection and analysis of possible instances of fake news. This article presents a new approach to this stance detection problem, based on combining string similarity features with a deep neural architecture that leverages ideas previously advanced in the context of learning efficient text representations, document classification, and natural language inference. Specifically, we use bi-directional Recurrent Neural Networks, together with max-pooling over the temporal/sequential dimension and neural attention, for representing (i) the headline, (ii) the first two sentences of the news article, and (iii) the entire news article. These representations are then combined/compared, complemented with similarity features inspired by other FNC-1 approaches, and passed to a final layer that predicts the stance of the article towards the headline. We also explore the use of external sources of information, specifically large datasets of sentence pairs originally proposed for training and evaluating natural language inference methods, in order to pre-train specific components of the neural network architecture (e.g., the RNNs used for encoding sentences). The obtained results attest to the effectiveness of the proposed ideas and show that our model, particularly when considering pre-training and the combination of neural representations with similarity features, slightly outperforms the previous state-of-the-art.
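The two representation steps named in the abstract, max-pooling over the temporal dimension of an encoder's hidden states and the combination/comparison of two encoded texts, can be sketched as follows. This is a minimal illustration using NumPy stand-ins, not the authors' implementation: the function names, dimensions, and the InferSent-style matching scheme (concatenation, absolute difference, element-wise product) are assumptions based on common practice in natural language inference models.

```python
import numpy as np

def max_pool_over_time(hidden_states):
    """Collapse a (timesteps, dim) matrix of Bi-RNN hidden states into a
    single fixed-size vector by taking the maximum over the temporal axis."""
    return hidden_states.max(axis=0)

def combine_representations(u, v):
    """Compare two encoded texts (e.g., headline vs. article body) via
    concatenation, absolute difference, and element-wise product; the
    resulting feature vector would feed the final classification layer."""
    return np.concatenate([u, v, np.abs(u - v), u * v])

# Toy example: random hidden states standing in for Bi-RNN outputs
# (5 and 7 timesteps, 4-dimensional states).
rng = np.random.default_rng(0)
headline = max_pool_over_time(rng.standard_normal((5, 4)))
body = max_pool_over_time(rng.standard_normal((7, 4)))
features = combine_representations(headline, body)
print(features.shape)  # (16,)
```

In the full model, this matching vector would be concatenated with the hand-crafted string similarity features before the final stance-prediction layer.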


Results from the Paper

Task | Dataset | Model | Metric | Value | Global Rank
Fake News Detection | FNC-1 | Bi-LSTM (max-pooling, attention) | Weighted Accuracy | 82.23 | #4
Fake News Detection | FNC-1 | Bi-LSTM (max-pooling, attention) | Per-class Accuracy (Agree) | 51.34 | #3
Fake News Detection | FNC-1 | Bi-LSTM (max-pooling, attention) | Per-class Accuracy (Disagree) | 10.33 | #3
Fake News Detection | FNC-1 | Bi-LSTM (max-pooling, attention) | Per-class Accuracy (Discuss) | 81.52 | #4
Fake News Detection | FNC-1 | Bi-LSTM (max-pooling, attention) | Per-class Accuracy (Unrelated) | 96.74 | #4
Natural Language Inference | MultiNLI | Stacked Bi-LSTMs (shortcut connections, max-pooling, attention) | Matched | 70.7 | #39
Natural Language Inference | MultiNLI | Stacked Bi-LSTMs (shortcut connections, max-pooling, attention) | Mismatched | 70.5 | #33
Natural Language Inference | MultiNLI | Bi-LSTM sentence encoder (max-pooling) | Matched | 70.7 | #39
Natural Language Inference | MultiNLI | Bi-LSTM sentence encoder (max-pooling) | Mismatched | 71.1 | #32
Natural Language Inference | MultiNLI | Stacked Bi-LSTMs (shortcut connections, max-pooling) | Matched | 71.4 | #37
Natural Language Inference | MultiNLI | Stacked Bi-LSTMs (shortcut connections, max-pooling) | Mismatched | 72.2 | #29
Natural Language Inference | SNLI | Stacked Bi-LSTMs (shortcut connections, max-pooling, attention) | % Test Accuracy | 84.4 | #81
Natural Language Inference | SNLI | Bi-LSTM sentence encoder (max-pooling) | % Test Accuracy | 84.5 | #79
Natural Language Inference | SNLI | Stacked Bi-LSTMs (shortcut connections, max-pooling) | % Test Accuracy | 84.8 | #76
