Digital Chemical Engineering (Jun 2024)
Artificial intelligence – Human intelligence conflict and its impact on process system safety
Abstract
In the Industry 4.0 revolution, industries are advancing their operations by leveraging Artificial Intelligence (AI). AI-based systems enhance industries by automating repetitive tasks and improving overall efficiency. From a safety perspective, however, operating a system with AI and no human interaction raises concerns about reliability. Recent developments have therefore made it imperative to establish a collaborative system between humans and AI, known as Intelligent Augmentation (IA). Industry 5.0 focuses on developing IA-based systems that facilitate collaboration between humans and AI. However, potential conflicts between humans and AI in controlling process plant operations pose a significant challenge in IA systems. Human-AI conflict in IA-based system operation can arise from differences in observation, interpretation, and control action. Observation conflicts may arise when humans and AI disagree on the observed data or information. Interpretation conflicts may occur when decisions drawn from the same observed data differ, influenced by the differing learning abilities of human intelligence (HI) and AI. Control action conflicts may arise when the AI-driven control action differs from the human operator's action. Conflicts between humans and AI may introduce additional risks to IA-based system operation. It is therefore crucial to understand the concept of human-AI conflict and perform a detailed risk analysis before implementing a collaborative system. This paper aims to: 1. investigate human and AI operations in process systems and the possible conflicts during collaboration; 2. formulate the concept of observation, interpretation, and action conflict in an IA-based system; and 3. provide a case study identifying the potential risks of human-AI conflict.
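The three-stage conflict taxonomy described above can be illustrated with a minimal sketch. The names, readings, and threshold below are hypothetical, not from the paper; the point is only to show how a disagreement between a human operator and an AI controller can be localized to the observation, interpretation, or action stage.

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """One agent's view of the process at a point in time (illustrative only)."""
    observation: float   # e.g. a perceived sensor reading
    interpretation: str  # diagnosis drawn from the observation
    action: str          # control action chosen from the interpretation

def classify_conflict(human: AgentState, ai: AgentState, tol: float = 0.5) -> list:
    """Return the stages at which the human and the AI disagree.

    `tol` is an assumed tolerance on numeric observations; interpretations
    and actions are compared as discrete labels.
    """
    conflicts = []
    if abs(human.observation - ai.observation) > tol:
        conflicts.append("observation")
    if human.interpretation != ai.interpretation:
        conflicts.append("interpretation")
    if human.action != ai.action:
        conflicts.append("action")
    return conflicts

# Hypothetical scenario: both agents see a similar pressure reading
# (no observation conflict), but diagnose and act on it differently.
human = AgentState(observation=10.2, interpretation="sensor drift", action="recalibrate sensor")
ai = AgentState(observation=10.4, interpretation="overpressure", action="open relief valve")
print(classify_conflict(human, ai))  # ['interpretation', 'action']
```

In this sketch an observation conflict is a quantitative mismatch, while interpretation and action conflicts are categorical mismatches; a fuller risk analysis would attach likelihoods and consequences to each stage.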