Big Data and Cognitive Computing (Jul 2024)
Trustworthy AI Guidelines in Biomedical Decision-Making Applications: A Scoping Review
Abstract
Recently proposed legal frameworks for Artificial Intelligence (AI) build on conceptual frameworks for ethical and trustworthy AI that provide the technical grounding for safety and risk management. This is especially important in high-risk applications, such as decision-making support systems in the biomedical domain. Frameworks for trustworthy AI span diverse requirements, including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, fairness, and societal and environmental impact. Researchers and practitioners who aim to transition experimental AI models and software to the market as medical devices, or to use them in actual medical practice, face the challenge of deploying processes, best practices, and controls that comply with trustworthy AI requirements. While checklists and general guidelines have been proposed to that end, a gap remains between these frameworks and actual practice. This paper reports the first scoping review on the topic specific to decision-making systems in the biomedical domain, and it attempts to consolidate existing practices as they appear in the academic literature on the subject.
Keywords