Conceptualizing Visual Analytic Interventions for Content Moderation

Sahaj Vaidya, Jie Cai, Soumyadeep Basu, Azadeh Naderi, Donghee Yvette Wohn, Aritra Dasgupta

Presentation: 2021-10-27T13:50:00Z
Exemplar figure, described by the caption below
Modern social media platforms like Twitch and YouTube embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity to which content creators are subjected. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are often laborious and complex, and even when semi-automated, they involve high-consequence human decision making. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we contribute a characterization of the data-driven problems in content moderation and a mapping between moderator needs and visual analytic tasks through a task abstraction framework.
Keywords

Social Science, Education, Humanities, Journalism, Intelligence Analysis, Knowledge Work, Task Abstractions & Application Domains

Abstract

Our work introduces a visual analytic task abstraction framework for addressing data-driven problems in proactive content moderation. We also discuss the implications of the framework for shaping the future of transparent and communicative moderation practices through visual analytic solutions. As a next step, we plan to realize our proposed visual analytic tasks within existing content moderation workflows. We will also conduct empirical studies to evaluate the effectiveness of the visual analytic interventions and the resulting human-machine interfaces in reducing the cognitive load and emotional stress of content moderators.