IEEE Access (Jan 2024)
OVANet: Dual Attention Mechanism Based New Deep Learning Framework for Diagnosis and Classification of Ovarian Cancer Subtypes From Histopathological Images
Abstract
This paper introduces OVANet, a new deep learning framework for the diagnosis and classification of ovarian cancer subtypes from histopathological images. OVANet integrates modified VGG19 and InceptionV3 blocks and incorporates a dual attention mechanism (Squeeze-and-Excitation and custom spatial attention) to capture rich, multi-scale, context-aware feature representations. This architecture combines intermediate layers from both base models with additional convolutional and max-pooling layers, the dual attention mechanisms, and global average pooling to extract the diverse image features crucial for accurate subtype differentiation. OVANet is trained and evaluated on an original dataset of 508 histopathological images, expanded to approximately 13,024 images through preprocessing and augmentation techniques such as rotations, shifts, zooms, and flips. This approach significantly enhances the generalization capability of the model, enabling it to outperform well-known pretrained models and previously published results. The proposed model achieves accuracy, precision, and AUC of 99.01%, along with a recall of 99.87%. Feature-map visualizations of all layers of the model, combined with the model explainability technique Grad-CAM, provide visual representations of the classifications, helping to establish transparency in clinical practice. This architecture represents a significant advance in medical imaging, particularly for the diagnosis and treatment planning of ovarian cancer. Because accurate detection is essential for reducing cancer mortality, the model's promising results suggest it could readily be adopted in clinical settings.
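The dual attention mechanism named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the reduction ratio, weight shapes, and the way the average- and max-pooled maps are combined in the spatial branch are assumptions, since the abstract does not specify them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation: reweight channels by global context.
    feat: (H, W, C) feature map; w1: (C, C//r), w2: (C//r, C) are the
    two fully connected layers of the excitation step (r is assumed)."""
    squeeze = feat.mean(axis=(0, 1))                     # (C,) global average pool
    excite = sigmoid(np.maximum(squeeze @ w1, 0) @ w2)   # (C,) channel gates in (0, 1)
    return feat * excite                                 # broadcast over H, W

def spatial_attention(feat, conv_w):
    """Custom spatial attention: gate each spatial location.
    conv_w: (2,) weights combining the channel-wise average- and
    max-pooled maps (a 1x1-conv stand-in, assumed here)."""
    avg_map = feat.mean(axis=-1)                         # (H, W)
    max_map = feat.max(axis=-1)                          # (H, W)
    gate = sigmoid(conv_w[0] * avg_map + conv_w[1] * max_map)
    return feat * gate[..., None]                        # broadcast over channels
```

Because both branches produce multiplicative gates in (0, 1), each attended feature map keeps the input shape while suppressing uninformative channels and locations before global average pooling.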
Keywords