Journal of Responsible Technology (Jun 2024)
Enabling affordances for AI Governance
Abstract
Organizations dealing with mission-critical, AI-based autonomous systems may need to provide continuous risk management controls and establish mechanisms for their governance. To achieve this, organizations must embed trustworthiness and transparency in these systems, together with human oversight and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability, and explainability. This paper conceptually models the transformation of the SDLC to enable affordances for assurance, explanations, interpretations, and overall governance of autonomous systems. We argue that opportunities to transform the SDLC arise through concerted interventions such as technical debt management, a shift-left approach, and non-ephemeral artifacts. This paper contributes to the theory and practice of governing autonomous systems and to building trustworthiness incrementally and hierarchically.