IEEE Access (Jan 2023)
Explainable Artificial Intelligence for Patient Safety: A Review of Application in Pharmacovigilance
Abstract
Explainable AI (XAI) is a methodology that addresses the black-box nature of artificial intelligence, and its necessity has recently been highlighted in various fields. The purpose of this research is to identify studies in the field of pharmacovigilance that use XAI. A total of 781 candidate papers were identified, of which only 25 met the selection criteria after manual screening. This study presents an intuitive review of the potential of XAI technologies in the field of pharmacovigilance. The included studies used clinical data, registry data, and knowledge data to investigate drug treatment, side effects, and drug interactions based on tree models, neural network models, and graph models. Finally, key challenges and research issues for the use of XAI in pharmacovigilance were identified. Although artificial intelligence (AI) is actively used in drug surveillance and patient safety, for example to gather adverse drug reaction information, extract drug-drug interactions, and predict drug effects, XAI is not yet widely utilized. Therefore, the challenges involved in its use, along with its future prospects, should be continuously discussed.
Keywords