IEEE Access (Jan 2024)

RotU-Net: An Innovative U-Net With Local Rotation for Medical Image Segmentation

  • Fuxiang Zhang,
  • Fengchao Wang,
  • Wenfeng Zhang,
  • Quanzhen Wang,
  • Yajun Liu,
  • Zhiming Jiang

DOI
https://doi.org/10.1109/ACCESS.2024.3363410
Journal volume & issue
Vol. 12
pp. 21114 – 21128

Abstract


In recent years, both convolutional neural networks (CNNs) and transformers have demonstrated impressive feature extraction capabilities in medical image segmentation. A common approach is to combine CNN and transformer encoders to learn local and global features efficiently, a technique widely adopted in semantic segmentation of medical images. However, challenges remain due to the limited sample size of medical image datasets and the intricate foreground edge information in these images; these factors make it difficult for models to capture key structures and foreground edge details, especially when trained on smaller datasets. To address these issues, we propose a U-Net-based model called "Rotate U-Net" (RotU-Net). Our design is inspired by the successful U-Net architecture, characterized by direct connections between encoder and decoder and skip connections at multiple resolutions. In addition, we propose a weight rotator as a feature extraction module: by computing partial element correlations, it enhances the network's ability to discriminate edge information and to focus on the foreground region while reducing redundant information in the features. Finally, we validated RotU-Net on the Synapse Multi-Organ Segmentation dataset (Synapse) and the Segmentation of Multiple Myeloma Plasma Cells in Microscopic Images dataset (SegPC). The experimental results show that RotU-Net achieves impressive performance with a very small number of parameters, demonstrating its effectiveness and efficiency.
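The abstract describes the standard U-Net data flow that RotU-Net builds on: an encoder path, a decoder path, and skip connections that fuse encoder features with upsampled decoder features at matching resolutions. The sketch below is a minimal NumPy illustration of that flow only; the block names are placeholders and it does not implement the authors' weight rotator module.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling: downsample a feature map by a factor of 2
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    # nearest-neighbour upsampling by a factor of 2
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def unet_skeleton(x):
    """One-level U-Net data flow: encode, downsample to a
    bottleneck, upsample, and fuse via a skip connection.
    Identity maps stand in for the learned convolutional blocks."""
    enc = x * 1.0                 # placeholder encoder block
    bottleneck = avg_pool2(enc)   # lower-resolution bottleneck
    up = upsample2(bottleneck)    # back to the encoder's resolution
    # skip connection: stack encoder features with upsampled features
    # (in a real U-Net this is a channel-wise concatenation)
    return np.stack([enc, up], axis=0)

x = np.arange(16, dtype=float).reshape(4, 4)
out = unet_skeleton(x)
print(out.shape)  # (2, 4, 4): encoder channel + decoder channel
```

The skip connection is what lets the decoder recover fine spatial detail (such as the foreground edges the abstract emphasizes) that is lost during downsampling.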

Keywords