Exploring Summarization to Enhance Headline Stance Detection

The spread of fake news and misinformation is causing serious problems for society, partly because more and more people read only the headlines or highlights of news, assuming they are reliable, instead of carefully analysing whether the content may be distorted or false. Specifically, the headline of a well-designed news item should correspond to a summary of the main information of that item. Unfortunately, this is not always the case, since various interests, such as increasing the number of clicks or political motives, can lie behind the generation of headlines that do not fulfil their original purpose. This paper analyses the use of automatic news summaries to determine the stance (i.e., position) of a headline with respect to the body of text associated with it. To this end, we propose a two-stage approach that uses summarization techniques as input for both classifiers instead of the full text of the news body, thus reducing the amount of information to be processed while retaining the important content. The experimentation has been carried out on the Fake News Challenge FNC-1 dataset, leading to a 94.13% accuracy and surpassing the state of the art. It is especially remarkable that the proposed approach, which uses only the relevant information provided by the automatic summaries instead of the full text, is able to classify the different stance categories with very competitive results. It can therefore be concluded that the use of automatic extractive summaries has a positive impact on determining the stance of very short information (i.e., a headline or sentence) with respect to its whole content.
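As an illustration of the core idea, the sketch below shows one simple way to build a headline-oriented extractive summary: score each body sentence by its bag-of-words cosine similarity to the headline and keep the top-k sentences, which could then be fed to a stance classifier in place of the full body. This is a minimal, hypothetical sketch (the function names and the similarity measure are assumptions, not the paper's actual summarization method).

```python
import math
import re
from collections import Counter

def _vec(text):
    # Simple bag-of-words vector: lowercased word counts.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def extractive_summary(headline, body_sentences, k=2):
    """Select the k body sentences most similar to the headline.

    The resulting short summary, rather than the whole article body,
    would serve as input to a downstream stance classifier.
    """
    h = _vec(headline)
    ranked = sorted(body_sentences,
                    key=lambda s: _cosine(h, _vec(s)),
                    reverse=True)
    return ranked[:k]
```

In practice, the paper's approach relies on more sophisticated automatic summarization, but the same principle applies: the summary retains the information most relevant to judging the headline's stance while discarding the rest.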



Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Fake News Detection | FNC-1 | Sepúlveda-Torres R., Vicente M., Saquete E., Lloret E., Palomar M. (2021) | Weighted Accuracy | 90.73 | #1 |
| | | | Per-class Accuracy (Agree) | 75.03 | #2 |
| | | | Per-class Accuracy (Disagree) | 63.41 | #2 |
| | | | Per-class Accuracy (Discuss) | 85.97 | #2 |
| | | | Per-class Accuracy (Unrelated) | 99.36 | #1 |
