We investigate the use of machine learning classifiers for detecting online abuse in empirical research.
We show that it is crucial to account for influencer-level structure, and find evidence that both influencer- and content-level factors matter: the number of followers each influencer has, the type of content (original posts, quotes, and replies), the length and toxicity of the content, and whether influencers explicitly request retweets.
Online misogyny is a pernicious social problem that risks making online platforms toxic and unwelcoming to women.
Detecting online hate is a difficult task that even state-of-the-art models struggle with.
The outbreak of COVID-19 has transformed societies across the world as governments tackle the health, economic and social costs of the pandemic.
Far-right actors are often purveyors of Islamophobic hate speech online, using social media to spread divisive and prejudiced messages which can stir up intergroup tensions and conflict.