
Journal article

Moderating borderline content while respecting fundamental values

Stuart Macdonald, Katy Vaughan

Policy & Internet, Volume: 16, Issue: 2, Pages: 347 - 361

Swansea University Authors: Stuart Macdonald, Katy Vaughan

  • 64730.VOR.pdf (PDF, 2.12MB) | Version of Record

    This is an open access article under the terms of the Creative Commons Attribution License.


DOI (Published version): 10.1002/poi3.376


Published in: Policy & Internet
ISSN: 1944-2866
Published: Wiley 2024

URI: https://cronfa.swan.ac.uk/Record/cronfa64730
Abstract: As efforts to identify and remove online terrorist and violent extremist content have intensified, concern has also grown about so-called lawful but awful content. Various options have been touted for reducing the visibility of this borderline content, including removing it from search and recommendation algorithms, downranking it and redirecting those who search for it. This article contributes to this discussion by considering the moderation of such content, in terms of three sets of values. First, definitional clarity. This is necessary to provide users with fair warning of what content is liable to moderation and to place limits on the discretion of content moderators. Yet, at present, definitions of borderline content are vague and imprecise. Second, necessity and proportionality. While downranking and removal from search and recommender algorithms should be distinguished from deplatforming, tech companies’ efforts to deamplify borderline content give rise to many of the same concerns as content removal and account shutdowns. Third, transparency. While a number of platforms now publish their content moderation policies and transparency data reports, these largely focus on violative, not borderline content. Moreover, there remain questions around access to data for independent researchers and transparency at the level of the individual user.
Keywords: Algorithms, borderline, content, content moderation, freedom of expression, recommendation, terrorist and violent extremist content, transparency
College: Faculty of Humanities and Social Sciences
Funders: Swansea University
Issue: 2
Start Page: 347
End Page: 361