e-Prime: Advances in Electrical Engineering, Electronics and Energy (Sep 2024)

A Novel Liver Tumor segmentation of Adverse Propagation Advanced Swin Transformer Network with Mask region-based convolutional neural networks

  • M. Kasipandi,
  • CP. Chandran,
  • S. Rajathi

Journal volume & issue
Vol. 9
p. 100632

Abstract

The diagnosis and treatment of liver diseases from computed tomography (CT) images depends on accurate segmentation of the liver and its tumours. Segmentation is difficult because lesions appear unevenly and have fuzzy borders and diverse densities, shapes, and sizes. This work focuses on deep learning algorithms for segmenting the liver and its tumours from abdominal CT scans, thereby reducing the time and effort required to diagnose liver disease. The study aims to classify and segment liver tumours using a novel deep learning-based model. A Mask region-based convolutional neural network (Mask R-CNN) is proposed for multiorgan segmentation to aid esophageal radiation treatment. Because organ boundaries can be fuzzy and organ shapes vary, the original Mask R-CNN performs well on natural-image segmentation but leaves something to be desired on the multiorgan segmentation task. To address this, the method offers two advantages: (1) an ROI (region of interest) generation method in the RPN (region proposal network) that exploits multiscale semantic features, and (2) a prebackground classification subnetwork integrated into the original mask-generation branch to improve the precision of multiorgan segmentation. The segmented image is then passed to the Adverse Propagation Advanced Swin Transformer Network (APESTNet) to prevent overfitting. The proposed model is trained on CT volume slices of patients with liver tumours and evaluated on the public 3D dataset IRCADB01. The dataset is split into 60% training and 40% testing for classification. The proposed model's accuracy and recall are 99.23% and 98.24%, respectively.
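The abstract reports a 60%/40% train-test split and evaluates the model by accuracy and recall. The paper itself publishes no code, so the following is only a minimal sketch of those two elements, assuming binary labels (1 = tumour, 0 = background) and using just the standard library; function names here are illustrative, not the authors'.

```python
import random

def train_test_split(items, train_frac=0.6, seed=0):
    """Shuffle a dataset and split it 60/40, as described in the abstract."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def recall(y_true, y_pred, positive=1):
    """True positives over all actual positives (tumour pixels/slices found)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

# Hypothetical example: 100 slice indices split 60/40.
train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 60 40
```

The reported figures (99.23% accuracy, 98.24% recall) would come from applying these metrics to the model's predictions on the 40% held-out portion.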

Keywords