Free access to large-scale public databases, together with the rapid progress of deep learning techniques, in particular Generative Adversarial Networks, has led to the generation of highly realistic fake content, with corresponding implications for society in this era of fake news.
One of the main reasons is that interpreting the news often requires knowledge of political or social context, or 'common sense', which current NLP algorithms still lack.
Recent studies in social science and psychology point to the potential value of social media data: 1) the confirmation bias effect reveals that consumers prefer to believe information that confirms their existing stances; 2) the echo chamber effect suggests that people tend to follow like-minded users and form segregated communities on social media.
First, fake news is intentionally written to mislead readers into believing false information, which makes it difficult and nontrivial to detect from news content alone; we therefore need to include auxiliary information, such as user social engagements on social media, to help make a determination.
The proliferation of fake news, i.e., news intentionally spread for misinformation, poses a threat to individuals and society.
Identifying public misinformation is a complicated and challenging task.
In this paper, we propose a learning algorithm for detecting visual image manipulations that is trained using only a large dataset of real photographs.
In this paper, we introduce a capsule network that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created using deep learning.
Although Generative Adversarial Networks (GANs) can be used to generate realistic images, improper use of these technologies raises hidden concerns.
One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events.