IEEE Access (Jan 2024)
A Multi-View Deep Evidential Learning Approach for Mammogram Density Classification
Abstract
Artificial intelligence algorithms, specifically deep learning, can assist radiologists by automating mammogram density assessment. However, trust in such algorithms must be established before they are widely adopted in clinical settings. In this study, we present an evidential deep learning approach called MV-DEFEAT, which combines the strengths of Dempster-Shafer evidence theory and subjective logic, for the mammogram density classification task. The framework fuses evidence from multiple mammographic views to mimic a radiologist's decision-making process. We utilized four open-source datasets, namely VinDr-Mammo, DDSM, CMMD, and VTB, to mitigate inherent biases and provide a diverse representation of the data. Our experimental findings demonstrate MV-DEFEAT's superior performance in terms of weighted macro-average area under the receiver operating characteristic curve (AUC) compared to the state-of-the-art multi-view deep learning model, referred to as MVDL. MV-DEFEAT yields relative improvements of 12.57%, 14.51%, 19.9%, and 22.53% on the VTB, VinDr-Mammo, CMMD, and DDSM datasets, respectively, for the mammogram density classification task. Additionally, for BI-RADS classification and the classification of mammograms as benign or malignant, MV-DEFEAT exhibits substantial gains over MVDL, with relative improvements of 31.46% and 50.78% on the DDSM and VinDr-Mammo datasets, respectively. These results underscore the efficacy of our approach. Through meticulous curation of diverse datasets and comprehensive comparative analyses, we ensure the robustness and reliability of our findings, thereby enhancing trust in adopting the MV-DEFEAT framework for various mammogram assessment tasks in clinical settings.
Keywords