Oslo Law Review (Oct 2024)

AI Systems and Criminal Liability: A Call for Action

  • Athina Sachoulidou

DOI
https://doi.org/10.18261/olr.11.1.3
Journal volume & issue
Vol. 11, no. 1
pp. 1 – 10

Abstract


The rapid advancement and widespread adoption of artificial intelligence (AI) and other enabling technologies underscore the enduring debate over attributing criminal liability to non-human agents. At the same time, the increasing risks associated with the use of AI systems, which may amount to grave violations of legal interests such as life, bodily integrity and privacy, raise concerns as to whether AI-related offences can be addressed by means of traditional criminal law categories. In particular, it is questionable whether commonly accepted frameworks rooted in concepts such as personhood, actus reus, causation and mens rea are adequately equipped to address criminal conduct in AI settings. This article provides an overview of the key points raised in this scholarly discourse and presents two primary approaches to criminal liability in the age of AI, namely the use of the 'permissible risk' doctrine and the introduction of new endangerment offences, exploring their merits and pitfalls.

Keywords