IEEE Access (Jan 2021)

OP-convNet: A Patch Classification-Based Framework for CT Vertebrae Segmentation

  • Syed Furqan Qadri,
  • Linlin Shen,
  • Mubashir Ahmad,
  • Salman Qadri,
  • Syeda Shamaila Zareen,
  • Salabat Khan

DOI
https://doi.org/10.1109/ACCESS.2021.3131216
Journal volume & issue
Vol. 9
pp. 158227 – 158240

Abstract


Accurate vertebrae segmentation from medical images plays an important role in clinical tasks such as surgical planning, post-operative assessment, and the diagnosis of kyphosis, scoliosis, degenerative disc disease, and spondylolisthesis. Although bone structures have high contrast in medical images, vertebrae segmentation remains a challenging task due to their complex structure, abnormal spine curves, and unclear boundaries. In recent years, deep learning has been widely applied to the segmentation of vertebrae images. In this paper, towards a robust and automatic segmentation system, we present an overlapping patch-based convNet (OP-convNet) model for automatic segmentation of vertebral CT images. Because 3D convolutional neural networks incur greater memory and processing costs and carry a higher risk of over-fitting, we instead perform segmentation with a 2D convNet operating on overlapping patches. In the proposed vertebrae segmentation method, OP-convNet effectively preserves the local information contained in CT images. We divide CT image slices into equal-sized square overlapping patches and apply a random undersampling (RUS) function on these patches to balance the classes and minimize computational requirements. These patches are then input into the model along with their corresponding ground-truth patches. The method has been evaluated on publicly available CT images from the MICCAI CSI workshop challenge. The results indicate that OP-convNet achieves a precision (PRE) of 90.1%, specificity (SPE) of 99.4%, accuracy (ACC) of 98.8%, and F-score of 90.1% in terms of patch-based classification accuracy, and a BF-score of 90.2%, sensitivity (SEN) of 90.3%, Jaccard index (JAC) of 82.3%, and dice similarity score (DSC) of 89.9% in terms of segmentation accuracy, outperforming previous methods across all metrics.
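The overlapping-patch extraction and RUS class-balancing steps described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the patch size, stride, and function names are assumptions, and the paper's exact RUS procedure may differ.

```python
import numpy as np

def extract_overlapping_patches(slice_2d, patch_size=32, stride=16):
    """Slide a square window over a 2D CT slice; stride < patch_size gives overlap.
    Patch size and stride here are illustrative, not the paper's parameters."""
    h, w = slice_2d.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(slice_2d[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def random_undersample(patches, labels, seed=0):
    """RUS-style class balancing: randomly discard majority-class patches
    so that every class keeps as many samples as the smallest class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=n_min, replace=False)
        for c in classes
    ])
    return patches[keep], labels[keep]
```

For a 64x64 slice with a 32-pixel patch and 16-pixel stride, the window takes three positions along each axis, yielding nine overlapping patches; the balanced set then feeds the 2D convNet together with the matching ground-truth patches.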

Keywords