IEEE Access (Jan 2024)
Developing a Transparent Diagnosis Model for Diabetic Retinopathy Using Explainable AI
Abstract
Diabetic retinopathy is a leading cause of vision impairment and partial sight loss, and it poses considerable diagnostic difficulty because of its diverse and varying symptoms. The disease presents non-uniformly, with different patients exhibiting different symptoms; interpreting fundus images requires highly qualified specialists; image interpretation is prone to error and inconsistency; and the absence of clear morphological signs often makes early diagnosis unlikely. Traditional diagnosis relies largely on expert interpretation of retinal images, which can introduce bias and inaccuracy, highlighting the need for improved diagnostic methods. Although conventional Artificial Intelligence (AI) methods enhance diagnostic capability remarkably, their black-box nature and informational opacity prevent healthcare providers from comprehending the AI's reasoning, which is essential for building trust and optimizing its use in practice. Explainable AI (XAI) is an emerging approach that addresses the black-box problem by improving the interpretability of models, allowing users to understand the logic behind specific decisions. This research proposes a diagnosis model for detecting diabetic retinopathy using XAI approaches that increase model interpretability and help clinicians understand the reasons behind its decisions. The proposed model is designed to enhance diagnostic accuracy and to offer comprehensible, concise insights into the diagnostic process. The convergence-history plots of the proposed model validate the learning process, showing steady improvement in accuracy and reduction in loss; the model achieves 94% diagnostic accuracy, outperforming traditional methods while improving interpretability and applicability in healthcare settings.
Keywords