Proceedings of the XXth Conference of Open Innovations Association FRUCT (Nov 2022)

On Artificial Intelligence: Software and Statistical Issues

  • Manfred Sneps-Sneppe
  • Dmitry Namiot

DOI
https://doi.org/10.5281/zenodo.7368494
Journal volume & issue
Vol. 32, no. 2
pp. 394 – 402

Abstract

This article is a kind of philosophical essay, a reflection on the difficulties that arise when applications of artificial intelligence (AI) are viewed from the standpoint of traditional statistical data processing. These difficulties are bound up with the unprecedented volume of Big Data and with the enormous amount of software involved. This, in turn, raises cybersecurity issues and calls for a new approach to AI system auditability, including an answer to the question of which statistical indicators auditability should be based on. When discussing AI applications, it is important to distinguish between autonomy and automation, that is, whether a system is truly autonomous or merely automated. At first glance, it seems that the cause of Big Data analysis failures is the difference in cultures between the machine learning and statistical communities. The reason, however, is apparently deeper, as the statistical paradox in the Big Data example shows. At present, it is not clear whether it will be possible to devise parameters that meet the requirements of insurance companies for safety- and security-critical AI applications. It is possible that the two concepts discussed in the paper, the data defect correlation and the Law of Large Populations, can serve as a starting point in the search for new measures for Big Data. Nor can we remain silent about the cyber threat situation, which makes Big Data analysis extremely difficult. The task of ensuring the robustness of machine learning software, especially in safety- and security-critical areas, currently exceeds the competence of individual companies and even governments and is becoming a matter of international cooperation.
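For context on the last point: the data defect correlation and the Law of Large Populations mentioned in the abstract are, to our understanding, the concepts introduced by Xiao-Li Meng for non-probability samples. A minimal LaTeX sketch of that error decomposition, under the assumption that this is the formulation the paper builds on, is:

\[
\bar{G}_n - \bar{G}_N
  = \underbrace{\rho_{R,G}}_{\text{data defect correlation}}
    \times \underbrace{\sqrt{\frac{N-n}{n}}}_{\text{data quantity}}
    \times \underbrace{\sigma_G}_{\text{problem difficulty}},
\]

where $\bar{G}_n$ is the mean of the recorded (self-selected) subsample of size $n$, $\bar{G}_N$ the mean of the full population of size $N$, $R$ the recording indicator, and $\sigma_G$ the population standard deviation of the variable $G$. For a fixed nonzero $\rho_{R,G}$, the error relative to simple random sampling grows with $\sqrt{N}$; this is the essence of the Law of Large Populations: the larger the population, the worse a biased Big Data sample performs compared with a small probability sample.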

Keywords