Open Issues in Combating Fake News: Interpretability as an Opportunity

4 Apr 2019 · Sina Mohseni, Eric Ragan, Xia Hu

Combating fake news requires a variety of defense methods. Although rumor detection and various linguistic analysis techniques are common methods for detecting false content in social media, there are other feasible mitigation approaches that could be explored by the machine learning community. In this paper, we present open issues and opportunities in fake news research that need further attention. We first review the different stages of the news life cycle in social media and discuss, with three examples, core vulnerabilities of news feed algorithms that allow fake news content to propagate. We then discuss how the complexity and lack of clarity of the fake news problem limit advancement in this field. Lastly, we present research opportunities from interpretable machine learning for mitigating fake news through 1) interpretable fake news detection and 2) transparent news feed algorithms. We propose three dimensions of interpretability: algorithmic interpretability, human interpretability, and the inclusion of supporting evidence, each of which can benefit fake news mitigation methods in different ways.
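As a minimal sketch of what "interpretable fake news detection" can mean in practice, the snippet below trains a transparent linear classifier whose per-token weights can be inspected directly. The tiny in-line dataset, the toy labels, and the choice of TF-IDF with logistic regression are illustrative assumptions for this example, not the method proposed in the paper.

```python
# Sketch: a linear text classifier whose coefficients serve as a simple
# global explanation of which tokens push a post toward the "fake" class.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus with hypothetical labels: 0 = credible, 1 = fake/clickbait.
texts = [
    "Scientists confirm vaccine passes clinical trial review",
    "SHOCKING cure doctors don't want you to know about",
    "City council approves budget after public hearing",
    "You won't believe this one weird trick to get rich",
]
labels = [0, 1, 0, 1]

vectorizer = TfidfVectorizer(lowercase=True)
X = vectorizer.fit_transform(texts)

clf = LogisticRegression().fit(X, labels)

# Interpretability: every feature (token) maps to a single coefficient,
# so the evidence behind a prediction can be listed for a human reviewer.
tokens = vectorizer.get_feature_names_out()
weights = clf.coef_[0]
top_fake_indicators = sorted(zip(tokens, weights), key=lambda t: t[1], reverse=True)[:5]
for token, w in top_fake_indicators:
    print(f"{token:>12s}  weight={w:+.3f}")
```

The same inspection idea extends to attaching supporting evidence (e.g., the highest-weight tokens or sentences) alongside each prediction, which is one way a detector can expose its reasoning to end users and moderators.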
