Heliyon (Apr 2024)
Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems
Abstract
As artificial intelligence systems gain traction, their trustworthiness becomes paramount for harnessing their benefits and mitigating their risks. This study underscores the pressing need for an expectation management framework that aligns stakeholder expectations before any system-related activities, such as data collection, modeling, or implementation. To this end, we introduce a comprehensive framework tailored to capturing end users' expectations of trustworthy artificial intelligence systems. To ensure its relevance and robustness, we validated the framework through semi-structured interviews whose questions were rooted in the framework's constructs and principles. These interviews engaged fourteen diverse end users across the healthcare and education sectors, including physicians, teachers, and students. Through a qualitative analysis of the interview transcripts, we identified key themes and discerned differing perspectives among the interviewee groups. Ultimately, our framework serves as a practical tool, paving the way for in-depth discussions about user expectations, illuminating the significance of various system attributes, and spotlighting potential challenges that might jeopardize a system's efficacy.