Jisuanji kexue yu tansuo (Sep 2022)
Image Semantic Segmentation Method with Fusion of Transposed Convolution and Deep Residual Learning
Abstract
Aiming at the low segmentation accuracy and high loss of deep learning based image semantic segmentation methods, an image semantic segmentation method that fuses transposed convolution and deep residual learning is proposed. Firstly, to counter the drop in segmentation accuracy and the slow convergence caused by increasing network depth, a deep residual learning module is designed to improve the training efficiency and convergence speed of the network. Secondly, to make feature map fusion more accurate during upsampling and feature extraction, the two upsampling methods of the deep residual U-net model, UpSampling2D and transposed convolution, are merged to form a new upsampling module. Finally, to alleviate over-fitting of the weights between the training set and the validation set during training, Dropout is introduced into the skip connection layers of the improved network, which enhances the learning ability of the model. The performance of the algorithm is evaluated on the CamVid dataset: its semantic segmentation accuracy reaches 89.93% and its loss is reduced to 0.23. Compared with the U-net model, the validation set accuracy is improved by 13.13 percentage points and the loss is reduced by 1.20, outperforming current image semantic segmentation methods. The proposed model combines the advantages of U-net, making image semantic segmentation more accurate and more effective, and improving the robustness of the algorithm.
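The core idea of the abstract's upsampling module — merging an UpSampling2D-style interpolation path with a transposed-convolution path — can be illustrated with a minimal single-channel NumPy sketch. The function names, the averaging fusion, and the cropping rule below are illustrative assumptions, not the paper's exact architecture, which would use learned multi-channel layers in a deep residual U-net.

```python
import numpy as np

def upsampling2d(x, factor=2):
    # Nearest-neighbour upsampling, analogous to Keras UpSampling2D.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def transposed_conv2d(x, kernel, stride=2):
    # Stride-s transposed convolution: each input pixel scatters a scaled
    # copy of the kernel onto the (larger) output grid.
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((h * stride + kh - stride, w * stride + kw - stride))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
    return out

def fused_upsample(x, kernel, stride=2):
    # One plausible way to merge the two upsampling paths: crop the
    # transposed-convolution output to the UpSampling2D size and average.
    up = upsampling2d(x, stride)
    tc = transposed_conv2d(x, kernel, stride)
    tc = tc[:up.shape[0], :up.shape[1]]
    return 0.5 * (up + tc)
```

With a 2x2 all-ones kernel and stride 2, the transposed-convolution path reproduces nearest-neighbour upsampling exactly, so the fused output coincides with both paths; with a learned kernel the two paths differ and the fusion combines interpolation with a trainable reconstruction.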
Keywords