Big Data & Society (Jun 2024)

Repairing the harm: Toward an algorithmic reparations approach to hate speech content moderation

  • Chelsea Peterson-Salahuddin

DOI: https://doi.org/10.1177/20539517241245333
Journal volume & issue: Vol. 11

Abstract

Content moderation algorithms shape how users understand and engage with social media platforms. However, when identifying hate speech, these automated systems often contain biases that can silence or further harm marginalized users. Recently, scholars have offered both restorative and transformative justice frameworks as alternative approaches to platform governance to mitigate harms caused to marginalized users. As a complement to these recent calls, in this essay, I take up the concept of reparation as one substantive approach social media platforms can use alongside and within these justice frameworks to take actionable steps toward addressing, undoing, and proactively preventing the harm caused by algorithmic content moderation. Specifically, I draw on established legal and legislative reparations frameworks to suggest how social media platforms can reconceptualize algorithmic content moderation in ways that decrease harm to marginalized users when identifying hate speech. I argue that the concept of reparations can reorient how researchers and corporate social media platforms approach content moderation, away from capitalist impulses toward efficiency and toward a framework that prioritizes creating an environment where individuals from marginalized communities feel safe, protected, and empowered.