Journal of Responsible Innovation (Jan 2023)
Ethical, political and epistemic implications of machine learning (mis)information classification: insights from an interdisciplinary collaboration between social and data scientists
Abstract
Machine learning (ML) classification models are becoming increasingly popular for tackling the sheer volume and speed of online misinformation. In building these models, data scientists need to make assumptions about the legitimacy and authoritativeness of the sources of ‘truth’ employed for model training and testing. This has political, ethical and epistemic implications which are rarely addressed in technical papers. Despite (and due to) their reported high performance, ML-driven moderation systems have the potential to shape public debate and create downstream negative impacts. This article presents findings from a responsible innovation (RI) inflected collaboration between science and technology studies scholars and data scientists. Following an interactive co-ethnographic process, we identify a series of algorithmic contingencies—key moments during ML model development which could lead to different future outcomes, uncertainties and harmful effects. We conclude by offering recommendations on how to address the potential failures of ML tools for combating online misinformation.
Keywords