Frontiers in Medicine (Jan 2024)

Automated wound care by employing a reliable U-Net architecture combined with ResNet feature encoders for monitoring chronic wounds

  • Maali Alabdulhafith,
  • Abduljabbar S. Ba Mahel,
  • Nagwan Abdel Samee,
  • Noha F. Mahmoud,
  • Rawan Talaat,
  • Mohammed Saleh Ali Muthanna,
  • Tamer M. Nassef

DOI
https://doi.org/10.3389/fmed.2024.1310137
Journal volume & issue
Vol. 11

Abstract

Chronic wounds greatly affect quality of life and require more intensive care than acute wounds, including regular follow-up appointments with a physician to track healing. Good wound treatment promotes healing and reduces complications. Wound care requires precise and reliable wound measurement to optimize patient treatment and outcomes in line with evidence-based best practices. Images are used to objectively assess wound state by quantifying key healing parameters. Nevertheless, robust segmentation of wound images is complex because of the high diversity of wound types and imaging conditions. This study proposes and evaluates a novel hybrid model for wound segmentation in medical images. The model combines advanced deep learning techniques with traditional image processing methods to improve the accuracy and reliability of wound segmentation. The main objective is to overcome the limitations of existing segmentation methods (UNet) by leveraging the combined advantages of both paradigms. We introduce a hybrid architecture in which a ResNet34 serves as the encoder and a UNet as the decoder. The combination of ResNet34's deep representation learning and UNet's efficient feature extraction yields notable benefits: the design integrates high-level and low-level features, enabling the generation of segmentation maps with high precision. Applying our model to real data, we obtained an Intersection over Union (IoU) of 0.973, a Dice score of 0.986, and an accuracy of 0.9736. These results indicate that the proposed method is more precise and accurate than the current state of the art.
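
The paper's implementation is not reproduced here; as a rough illustration of the encoder-decoder pairing the abstract describes, the sketch below builds a U-Net with a ResNet34 encoder using the third-party segmentation_models_pytorch library. The library choice, ImageNet pre-training, input size, and single-class output are assumptions for illustration, not details taken from the abstract.

    # Minimal sketch (not the authors' code) of a U-Net whose encoder is a
    # ResNet34, built with the segmentation_models_pytorch library.
    import torch
    import segmentation_models_pytorch as smp

    # ResNet34 supplies the deep, pre-trained encoder; the U-Net decoder
    # upsamples and fuses the encoder's skip connections back to full
    # resolution, combining high-level and low-level features.
    model = smp.Unet(
        encoder_name="resnet34",
        encoder_weights="imagenet",  # assumption: ImageNet-pretrained backbone
        in_channels=3,               # RGB wound photographs
        classes=1,                   # binary wound / background mask
    )

    # Forward pass on a dummy batch: the output is a per-pixel logit map.
    x = torch.randn(1, 3, 256, 256)
    with torch.no_grad():
        logits = model(x)            # shape: (1, 1, 256, 256)
    mask = torch.sigmoid(logits) > 0.5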
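For reference, the reported metrics follow the standard definitions IoU = |A ∩ B| / |A ∪ B| and Dice = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of how they are computed on binary masks (function and variable names are illustrative):

    import numpy as np

    def iou_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
        """Compute IoU and Dice for binary masks of identical shape."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        iou = intersection / (union + eps)
        dice = 2 * intersection / (pred.sum() + target.sum() + eps)
        return iou, dice

    # Example: a perfect prediction yields IoU = Dice = 1.0.
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    print(iou_dice(mask, mask))  # (~1.0, ~1.0)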

Keywords