MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models

1 Sep 2020 · Thai Le, Suhang Wang, Dongwon Lee

In recent years, the proliferation of so-called "fake news" has caused much disruption in society and weakened the news ecosystem. To mitigate these problems, researchers have developed state-of-the-art models that automatically detect fake news on social media using sophisticated data science and machine learning techniques. In this work, we ask "what if adversaries attempt to attack such detection models?" and investigate related issues by (i) proposing a novel threat model against fake news detectors, in which adversaries can post malicious comments toward news articles to mislead fake news detectors, and (ii) developing MALCOM, an end-to-end adversarial comment generation framework to achieve such an attack. Through a comprehensive evaluation, we demonstrate that MALCOM can successfully mislead five of the latest neural detection models into always outputting targeted real and fake news labels about 94% and 93.5% of the time on average, respectively. Furthermore, MALCOM can also fool black-box fake news detectors into always outputting real news labels 90% of the time on average. We also compare our attack model with four baselines across two real-world datasets, evaluating not only attack performance but also generation quality, coherency, transferability, and robustness.
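The threat model can be summarized concretely: the detector classifies an article together with its user comments, so an attacker who controls only the comments can inject text that shifts the prediction toward a target label. Below is a minimal, hypothetical sketch of this idea in Python. The keyword-based detector, the candidate comment pool, and all names are illustrative assumptions; MALCOM itself trains an end-to-end neural generator that produces such comments rather than selecting them from a fixed pool.

```python
# Toy illustration of the threat model: a detector scores an article together
# with its comments, and an attacker appends a crafted comment to push the
# prediction toward a target label. The keyword detector and comment pool
# below are hypothetical stand-ins, not MALCOM's actual method.

REAL_CUES = {"confirmed", "official", "verified", "sources", "report"}
FAKE_CUES = {"hoax", "shocking", "exposed", "secret", "clickbait"}

def detector_score(article: str, comments: list[str]) -> float:
    """Return a toy 'real news' probability from keyword counts."""
    tokens = " ".join([article] + comments).lower().split()
    real = sum(tok in REAL_CUES for tok in tokens)
    fake = sum(tok in FAKE_CUES for tok in tokens)
    return (real + 1) / (real + fake + 2)  # Laplace-smoothed ratio

def attack(article: str, comments: list[str], pool: list[str],
           target_real: bool = True) -> str:
    """Black-box attack: pick the candidate comment that moves the detector
    score furthest toward the target label (a stand-in for a generator)."""
    score = lambda c: detector_score(article, comments + [c])
    return max(pool, key=score) if target_real else min(pool, key=score)

if __name__ == "__main__":
    article = "Shocking secret memo exposed by anonymous insider"
    comments = ["This looks like clickbait to me"]
    pool = [
        "Multiple sources confirmed this report",
        "Total hoax, nothing verified here",
        "Officials verified the report via official sources",
    ]
    before = detector_score(article, comments)
    adv = attack(article, comments, pool, target_real=True)
    after = detector_score(article, comments + [adv])
    print(f"score before: {before:.2f}, after injecting {adv!r}: {after:.2f}")
```

Running the sketch shows the toy detector's "real news" score rising once the adversarial comment is appended, which is the effect the attack exploits: the detector's reliance on comment text becomes an attack surface.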
