Gongkuang Zidonghua (Nov 2023)

Research on super-resolution reconstruction of mine images

  • WANG Yuanbin,
  • LIU Jia,
  • GUO Yaru,
  • WU Bingchao

DOI
https://doi.org/10.13272/j.issn.1671-251x.2023080081
Journal volume & issue
Vol. 49, no. 11
pp. 76 – 83, 120

Abstract


Due to heavy dust and low illumination in underground environments, mine images suffer from low resolution and blurred detail. When existing image super-resolution reconstruction algorithms are applied to mine images, it is difficult to capture image information at different scales; the networks carry too many parameters, which slows reconstruction; and the reconstructed images are prone to detail loss, blurred edge contours, and artifacts. A mine image super-resolution reconstruction algorithm based on a multi-scale dense channel attention super-resolution generative adversarial network (SRGAN) is proposed. A multi-scale dense channel attention residual block is designed to replace the original residual block of SRGAN: two parallel densely connected blocks with different convolutional kernel sizes extract image features at multiple scales, and efficient channel attention (ECA) modules are integrated to strengthen attention to high-frequency information. Depthwise separable convolution is used to lighten the network and suppress the growth of parameter counts, and a texture loss is used to constrain network training and avoid artifacts as the network deepens. The proposed algorithm is compared with the classic super-resolution reconstruction methods bicubic interpolation, SRCNN, SRResNet, and SRGAN on both an underground dataset and public datasets. The results show that the proposed algorithm outperforms the comparison algorithms in both subjective and objective evaluations. Compared with SRGAN, it reduces network parameters by 2.54%; compared with the average index values of the classic algorithms, its peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) increase by 0.764 dB and 0.05358 respectively. The algorithm better preserves texture, contours, and other image details, and the reconstructed images are more consistent with human visual perception.
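The abstract credits depthwise separable convolution with lightening the network. As a rough sketch of why (with assumed channel counts and kernel size, not figures taken from the paper), a standard k×k convolution has c_in·c_out·k² weights, while the depthwise-plus-pointwise factorization has only c_in·k² + c_in·c_out:

```python
# Illustrative parameter-count comparison for depthwise separable
# convolution. The layer shapes below (64 channels, 3x3 kernels) are
# assumptions for illustration only, not values from the paper.

def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv (biases omitted)."""
    depthwise = c_in * k * k        # per-channel spatial filtering
    pointwise = c_in * c_out        # 1x1 cross-channel mixing
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 64, 64, 3      # assumed SRGAN-style feature widths
    std = standard_conv_params(c_in, c_out, k)        # 36864
    sep = depthwise_separable_params(c_in, c_out, k)  # 4672
    print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

For these assumed shapes the factorized layer uses roughly 13% of the weights of the standard convolution, which is why swapping it in can offset the parameters added by the dense connections and attention modules.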

Keywords