Heritage Science (Jan 2022)

Ancient mural segmentation based on a deep separable convolution network

  • Jianfang Cao,
  • Xiaodong Tian,
  • Zhiqiang Chen,
  • Leelavathi Rajamanickam,
  • Yiming Jia

DOI
https://doi.org/10.1186/s40494-022-00644-2
Journal volume & issue
Vol. 10, no. 1
pp. 1–17

Abstract


Traditional methods for ancient mural segmentation suffer from drawbacks including fuzzy target boundaries and low efficiency. To address these problems, this study proposes a pyramid scene parsing MobileNetV2 network (PSP-M) that fuses a lightweight neural network based on deep separable convolution with a multiscale image segmentation model. In this model, MobileNetV2, which is built on deep separable convolutions, is embedded as the backbone network in the image segmentation model PSPNet. The pyramid scene parsing structure shared by the two models processes the background features of images to reduce feature loss and improve the efficiency of image feature extraction. Meanwhile, atrous convolution is used to expand the receptive field, preserving the integrity of image semantic information without increasing the number of parameters. Compared with traditional image segmentation models, PSP-M increases the average training accuracy by 2%, raises the peak signal-to-noise ratio by 1–2 dB, and improves the structural similarity index by 0.1–0.2.
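To make the two ingredients of the abstract concrete, the sketch below shows a generic building block combining a depthwise separable convolution (the operation underlying MobileNetV2, referred to here as "deep separable convolution") with atrous (dilated) convolution, which widens the receptive field without adding parameters. This is an illustrative PyTorch example under assumed layer sizes, not the authors' PSP-M implementation; the class name and dimensions are hypothetical.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 pointwise conv. Setting dilation > 1 makes the
    depthwise step an atrous convolution, enlarging the receptive field
    while keeping the parameter count unchanged."""

    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        # groups=in_ch applies one filter per input channel (depthwise);
        # padding=dilation keeps the spatial size constant for a 3x3 kernel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)  # activation used in MobileNetV2

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


# Example: a hypothetical 32-channel feature map from a mural image.
x = torch.randn(1, 32, 64, 64)                 # batch, channels, height, width
block = DepthwiseSeparableConv(32, 64, dilation=2)
print(block(x).shape)                          # torch.Size([1, 64, 64, 64])
```

With dilation=2 the 3x3 depthwise kernel covers a 5x5 region of the input, illustrating how atrous convolution expands the perceptive range without changing the number of weights.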

Keywords