Moderation Challenges in Voice-based Online Communities on Discord

13 Jan 2021  ·  Jialun Aaron Jiang, Charles Kiene, Skyler Middler, Jed R. Brubaker, Casey Fiesler

Online community moderators are on the front lines of combating problems like hate speech and harassment, but new modes of interaction can introduce unexpected challenges. In this paper, we consider moderation practices and challenges in the context of real-time, voice-based communication through 25 in-depth interviews with moderators on Discord. Our findings suggest that the affordances of voice-based online communities change what it means to moderate content and interactions. Not only are there new ways to break rules that moderators of text-based communities find unfamiliar, such as disruptive noise and voice raiding, but acquiring evidence of rule-breaking behaviors is also more difficult due to the ephemerality of real-time voice. While moderators have developed new moderation strategies, these strategies are limited and often based on hearsay and first impressions, resulting in problems ranging from unsuccessful moderation to false accusations. Based on these findings, we discuss how voice communication complicates current understandings and assumptions about moderation, and outline ways that platform designers and administrators can design technology to facilitate moderation.


Categories


Human-Computer Interaction · Computers and Society
