Measurement: Sensors (Apr 2024)

Residual learning for segmentation of the medical images in healthcare

  • Jyotirmaya Sahoo,
  • Shiv Kumar Saini,
  • Shweta Singh,
  • Ashendra Kumar Saxena,
  • Sachin Sharma,
  • Aishwary Awasthi,
  • R. Rajalakshmi

Journal volume & issue
Vol. 32
p. 100998

Abstract


Medical workers can assess disease progression and create timely treatment plans with the help of automated and accurate 3D segmentation of medical images. Deep convolutional neural networks (DCNNs) have been widely used for this task, but their accuracy still needs to improve, largely because they make insufficient use of 3D context. This study proposes a three-dimensional residual network, ResUNet++, for precise segmentation of three-dimensional medical images, built from an encoder, a segmentation decoder, and a context residual decoder. The two decoders are connected at each scale through context residual maps and context attention maps: the former explicitly learn inter-slice context information, while the latter use that context as attention to increase segmentation accuracy. The model was assessed on the MICCAI 2018 BraTS dataset and the Pancreas-CT dataset. Results on the BraTS and Pancreas-CT datasets were compared in terms of the enhancing tumor (ET), whole tumor (WT), and tumor core (TC) regions, and the proposed model was further evaluated with and without boundary loss using the validation Dice score. The outcomes not only show how effective the proposed 3D residual learning approach is, but also show that ResUNet++ offers better accuracy than six top-ranking techniques for brain tumor segmentation.
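To make the architectural description above concrete, the PyTorch module below is a minimal, simplified sketch of the dual-decoder idea: a 3D residual encoder feeds both a segmentation decoder and a context decoder, and the context map is reused as attention on the segmentation features. It is not the authors' ResUNet++ implementation; the class and parameter names (ContextResUNet3D, ResBlock3D, base_ch) are assumptions for this illustration, and only one decoder scale is shown.

```python
# Simplified sketch of a 3D residual encoder with two decoders (segmentation +
# context), where the context output gates the segmentation features as attention.
# Names and structure are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn


class ResBlock3D(nn.Module):
    """3D residual block: two conv layers plus an identity/projection shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch),
        )
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))


class ContextResUNet3D(nn.Module):
    """Shared encoder + two decoders; the context decoder gates the segmentation decoder."""
    def __init__(self, in_ch=1, num_classes=2, base_ch=16):
        super().__init__()
        c1, c2 = base_ch, base_ch * 2
        self.enc1 = ResBlock3D(in_ch, c1)
        self.down = nn.Conv3d(c1, c2, kernel_size=2, stride=2)
        self.enc2 = ResBlock3D(c2, c2)
        self.up = nn.ConvTranspose3d(c2, c1, kernel_size=2, stride=2)
        # Two parallel decoder blocks at the full-resolution scale.
        self.seg_dec = ResBlock3D(c1 * 2, c1)
        self.ctx_dec = ResBlock3D(c1 * 2, c1)
        # Heads: segmentation logits and a single-channel context map.
        self.seg_head = nn.Conv3d(c1, num_classes, 1)
        self.ctx_head = nn.Conv3d(c1, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution encoder features
        e2 = self.enc2(self.down(e1))            # half-resolution encoder features
        u = torch.cat([self.up(e2), e1], dim=1)  # decoder input with skip connection
        ctx_map = torch.sigmoid(self.ctx_head(self.ctx_dec(u)))  # context map in [0, 1]
        seg_feat = self.seg_dec(u)
        seg_feat = seg_feat * (1 + ctx_map)      # context-as-attention re-weighting
        return self.seg_head(seg_feat), ctx_map


if __name__ == "__main__":
    model = ContextResUNet3D(in_ch=4, num_classes=4)    # e.g. 4 MRI modalities, 4 BraTS labels
    logits, ctx = model(torch.randn(1, 4, 32, 64, 64))  # (batch, channels, D, H, W)
    print(logits.shape, ctx.shape)
```

In this sketch the context map could be supervised with an auxiliary (e.g. boundary) loss alongside the segmentation loss, which is one plausible reading of the with/without boundary-loss comparison mentioned in the abstract.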

Keywords