Правовое государство: теория и практика (Sep 2024)
CIVIL LIABILITY FOR HARM CAUSED BY THE USE OF ARTIFICIAL INTELLIGENCE
Abstract
In Russia, large-scale work to digitize all areas of state activity has been under way since the early 2000s, beginning with the adoption of the Federal Target Program «Electronic Russia», which was in effect from 2002 to 2010. In effect, this marked the start of the global digitalization of every person and every aspect of daily life. These processes are currently unfolding within the framework of the National Program «Digital Economy of the Russian Federation» in the following areas: normative regulation of the digital environment; digital economy personnel; information infrastructure; information security; digital technologies; digital public administration; artificial intelligence; development of human resources in the IT industry; and provision of Internet access through the development of satellite communications. Of particular scientific interest are the issues of applying artificial intelligence technologies, which we consider the most promising direction, capable of fundamentally changing (improving) the quality of people's lives. However, the introduction of any new technology carries certain risks of negative consequences and, accordingly, raises questions of liability for harm caused by its use. Purpose: to analyze the issues of civil liability for harm caused by the use of artificial intelligence. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; specific scientific methods: the legal-dogmatic method and interpretation of legal norms. Results: the study identifies problematic aspects of the use of artificial intelligence with regard to the legal regulation of harm caused by this technology. The author presents a number of proposals for discussion by the scientific community whose implementation would minimize the risks of harm arising from the use of artificial intelligence or mitigate the consequences of such harm.
Keywords