IET Image Processing (Nov 2024)

Dual knowledge‐guided two‐stage model for precise small organ segmentation in abdominal CT images

  • Tao Liu,
  • Xukun Zhang,
  • Zhongwei Yang,
  • Minghao Han,
  • Haopeng Kuang,
  • Shuwei Ma,
  • Le Wang,
  • Xiaoying Wang,
  • Lihua Zhang

DOI
https://doi.org/10.1049/ipr2.13221
Journal volume & issue
Vol. 18, no. 13
pp. 3935 – 3949

Abstract


Multi-organ segmentation from abdominal CT scans is crucial for many medical examinations and diagnoses. Despite the remarkable achievements of existing deep-learning-based methods, accurately segmenting small organs remains challenging due to their small size and low contrast. This article introduces a novel knowledge-guided cascaded framework that exploits two types of knowledge, image-intrinsic knowledge (anatomy) and clinical expertise (radiology), to improve the segmentation accuracy of small abdominal organs. Specifically, exploiting the anatomical similarity across abdominal CT scans, the approach employs entropy-based registration to map high-quality segmentation results onto inaccurate first-stage results, thereby guiding precise localization of small organs. Additionally, inspired by radiologists' practice of annotating images from multiple perspectives, a novel Multi-View Fusion Convolution (MVFC) operator is developed that extracts and adaptively fuses features from multiple directions of a CT volume to effectively refine the segmentation of small organs. The MVFC operator also serves as a seamless drop-in replacement for conventional convolutions in diverse model architectures. Extensive experiments on the Abdominal Multi-Organ Segmentation (AMOS) dataset demonstrate the superiority of the method, setting a new benchmark for small-organ segmentation.
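The abstract does not give MVFC's exact formulation, but the general idea it describes, convolving a CT volume along several plane orientations and adaptively fusing the responses, can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' implementation: the function name, the single shared 2D kernel, and the softmax fusion over three orthogonal views are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def multi_view_fusion_conv(volume, kernel2d, view_logits):
    """Sketch of a multi-view fusion convolution (hypothetical form).

    Convolves a 3D CT volume with the same 2D kernel oriented in the
    axial, coronal, and sagittal planes, then fuses the three responses
    with softmax-normalized weights. `view_logits` stands in for the
    learned, adaptive fusion parameters of an MVFC-style operator.
    """
    kz = kernel2d[np.newaxis, :, :]   # axial view: convolve within (y, x)
    ky = kernel2d[:, np.newaxis, :]   # coronal view: convolve within (z, x)
    kx = kernel2d[:, :, np.newaxis]   # sagittal view: convolve within (z, y)
    views = [convolve(volume, k, mode="nearest") for k in (kz, ky, kx)]
    w = np.exp(view_logits - np.max(view_logits))
    w = w / w.sum()                   # softmax weights over the three views
    return sum(wi * vi for wi, vi in zip(w, views))

# Usage: a small random volume with a 3x3 averaging kernel and equal weights.
vol = np.random.rand(8, 8, 8).astype(np.float32)
out = multi_view_fusion_conv(vol, np.full((3, 3), 1.0 / 9.0), np.zeros(3))
```

Because the operator keeps the input shape and only mixes per-view convolution responses, it can, as the abstract notes for MVFC, replace an ordinary convolution layer without changing the surrounding architecture.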

Keywords