Towards Detection of Subjective Bias using Contextualized Word Embeddings

16 Feb 2020  ·  Tanvi Dadu, Kartikey Pant, Radhika Mamidi ·

Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced into natural language via inflammatory words and phrases, casting doubt on facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of $360k$ labeled instances drawn from Wikipedia edits that remove various instances of bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score.
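The abstract does not specify how the BERT-based ensemble combines its members, so the following is only a minimal sketch of one common scheme, soft voting: each model's classification logits are converted to probabilities and averaged before taking the argmax. The model names, logit values, and label ordering below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_vote(logits_per_model):
    """Average class probabilities from several models (soft voting)."""
    probs = np.stack([softmax(l) for l in logits_per_model])
    return probs.mean(axis=0)  # shape: (n_examples, n_classes)

# Toy logits for 2 sentences from two hypothetical ensemble members
# (e.g. a RoBERTa and an ALBERT classifier head); values are made up.
roberta_logits = np.array([[2.0, -1.0], [0.2, 0.4]])
albert_logits = np.array([[1.5, -0.5], [-0.3, 0.9]])

avg = soft_vote([roberta_logits, albert_logits])
preds = avg.argmax(axis=-1)  # assumed label order: 0 = neutral, 1 = biased
```

Hard voting (majority over each model's argmax) is a common alternative; soft voting retains each model's confidence, which often helps when members disagree.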


Datasets

Wiki Neutrality Corpus (WNC)
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Bias Detection | Wiki Neutrality Corpus | RoBERTa+ALBERT | F1 | 70.4 | #1 |
