Towards Automatic Detection of Misinformation in Online Medical Videos

4 Sep 2019 · Rui Hou, Verónica Pérez-Rosas, Stacy Loeb, Rada Mihalcea

Recent years have witnessed a significant increase in the online sharing of medical information, with videos representing a large fraction of such online sources. However, previous studies have shown that more than half of the health-related videos on platforms such as YouTube contain misleading information and biases. Hence, it is crucial to build computational tools that can help evaluate the quality of these videos so that users can obtain accurate information to inform their decisions. In this study, we focus on the automatic detection of misinformation in YouTube videos, using prostate cancer videos as our entry point to the problem. The contribution of this paper is twofold. First, we introduce a new dataset consisting of 250 prostate cancer videos manually annotated for misinformation. Second, we explore the use of linguistic, acoustic, and user engagement features to develop classification models that identify misinformation. Through a series of ablation experiments, we show that we can build automatic models with accuracies of up to 74%, corresponding to a precision of 76.5% and a recall of 73.2% on misinformative instances.
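
The abstract describes combining linguistic, acoustic, and user engagement features in a classification model. The sketch below illustrates what such a feature-based setup could look like; it is not the paper's actual pipeline. The toy data, column names, the TF-IDF representation of transcripts, and the logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch of a feature-based misinformation classifier.
# Assumptions (not from the paper): transcripts and engagement metadata
# live in a pandas DataFrame with the columns below, and a simple
# logistic-regression model stands in for the paper's classifiers.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: one row per video, label 1 = misinformative, 0 = trustworthy.
videos = pd.DataFrame({
    "transcript": [
        "screening guidelines recommend discussing PSA testing with your doctor",
        "this natural supplement cures prostate cancer in two weeks",
        "treatment options include surveillance surgery and radiation therapy",
        "doctors hide this simple trick that eliminates tumors overnight",
    ],
    "view_count": [12000, 530, 8400, 95000],      # user engagement signals
    "like_ratio": [0.92, 0.41, 0.88, 0.97],
    "comment_count": [340, 12, 150, 2100],
    "label": [0, 1, 0, 1],
})

features = ColumnTransformer([
    # TF-IDF over the spoken transcript stands in for the linguistic features.
    ("linguistic", TfidfVectorizer(max_features=5000), "transcript"),
    # Scaled numeric engagement metadata (views, likes, comments).
    ("engagement", StandardScaler(), ["view_count", "like_ratio", "comment_count"]),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validated accuracy; an ablation would drop one feature group at a time
# from the ColumnTransformer and re-run this evaluation.
scores = cross_val_score(model, videos.drop(columns="label"), videos["label"], cv=2)
print(f"mean accuracy: {scores.mean():.3f}")
```

Acoustic features (e.g., prosodic or voice-quality descriptors extracted from the audio track) would enter the same ColumnTransformer as an additional numeric block; they are omitted here only to keep the sketch self-contained.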
