Scientific Reports (Apr 2020)

Abdominal multi-organ auto-segmentation using 3D-patch-based deep convolutional neural network

  • Hojin Kim,
  • Jinhong Jung,
  • Jieun Kim,
  • Byungchul Cho,
  • Jungwon Kwak,
  • Jeong Yun Jang,
  • Sang-wook Lee,
  • June-Goo Lee,
  • Sang Min Yoon

DOI
https://doi.org/10.1038/s41598-020-63285-0
Journal volume & issue
Vol. 10, no. 1
pp. 1–9

Abstract


Segmentation of normal organs is a critical and time-consuming process in radiotherapy. Auto-segmentation of abdominal organs has been made possible by the advent of the convolutional neural network. We utilized the U-Net, a 3D-patch-based convolutional neural network, and added graph-cut algorithm-based post-processing. The inputs were 3D-patch-based CT images consisting of 64 × 64 × 64 voxels, designed to produce 3D multi-label semantic images representing the liver, stomach, duodenum, and right/left kidneys. The datasets for training, validation, and testing consisted of 80, 20, and 20 CT simulation scans, respectively. For accuracy assessment, the predicted structures were compared with those produced by the atlas-based method and by inter-observer segmentation, using the Dice similarity coefficient, Hausdorff distance, and mean surface distance. Efficiency was quantified by measuring the time elapsed for segmentation with and without automation using the U-Net. The U-Net-based auto-segmentation outperformed the atlas-based auto-segmentation for all abdominal structures and showed results comparable to the inter-observer segmentations, especially for the liver and kidneys. The average segmentation time without automation was 22.6 minutes, which was reduced to 7.1 minutes with automation using the U-Net. Our proposed auto-segmentation framework using the 3D-patch-based U-Net for multiple abdominal organs demonstrated potential clinical usefulness in terms of accuracy and time-efficiency.
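To make the patch-based workflow and the accuracy metric concrete, the sketch below shows how a CT volume might be tiled into the 64 × 64 × 64 patches described in the abstract and how a per-organ Dice similarity coefficient can be computed from predicted and reference label maps. This is a minimal illustration, not the authors' implementation: the function names (`extract_patches`, `dice_coefficient`), the stride of 32 voxels, and the edge-padding strategy are assumptions; only the 64-voxel patch size and the use of Dice come from the paper.

```python
import math
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one organ label in 3D label maps."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both maps empty for this organ: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom


def extract_patches(volume: np.ndarray, patch_size: int = 64, stride: int = 32):
    """Yield (offset, patch) pairs of patch_size^3 cubes covering the volume.

    Stride and edge padding are illustrative choices; the paper only
    specifies the 64 x 64 x 64 patch size.
    """
    # Pad each axis so a stride-spaced grid of patches covers every voxel.
    pads = []
    for s in volume.shape:
        target = patch_size + stride * math.ceil(max(s - patch_size, 0) / stride)
        pads.append((0, target - s))
    padded = np.pad(volume, pads, mode="edge")

    for z in range(0, padded.shape[0] - patch_size + 1, stride):
        for y in range(0, padded.shape[1] - patch_size + 1, stride):
            for x in range(0, padded.shape[2] - patch_size + 1, stride):
                yield (z, y, x), padded[z:z + patch_size,
                                        y:y + patch_size,
                                        x:x + patch_size]


if __name__ == "__main__":
    # Toy example: a small synthetic "CT" volume and matching label maps.
    volume = np.random.randint(-1000, 1000, size=(96, 128, 128), dtype=np.int16)
    truth = np.zeros(volume.shape, dtype=np.uint8)
    truth[30:60, 40:90, 40:90] = 1          # pretend label 1 is the liver
    pred = np.roll(truth, shift=2, axis=2)  # slightly shifted "prediction"

    n_patches = sum(1 for _ in extract_patches(volume))
    print(f"patches covering the volume: {n_patches}")
    print(f"Dice for label 1: {dice_coefficient(pred, truth, label=1):.3f}")
```

In a real pipeline each extracted patch would be passed through the trained 3D U-Net, the per-patch multi-label predictions stitched back into a whole-volume label map, and the graph-cut post-processing applied before the Dice, Hausdorff distance, and mean surface distance are evaluated per organ.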