Jisuanji kexue yu tansuo (Jun 2021)

Traffic Sign Semantic Segmentation Based on Convolutional Neural Network

  • MA Yu, ZHANG Liguo, DU Huimin, MAO Zhili

DOI
https://doi.org/10.3778/j.issn.1673-9418.2005060
Journal volume & issue
Vol. 15, no. 6
pp. 1114 – 1121

Abstract

Image semantic segmentation is a necessary component of modern autonomous driving systems, because real-time and accurate capture of road condition information is key to navigation and action planning. Traffic signs are an important source of road condition information, and a traffic sign semantic segmentation algorithm that is stable, accurate, and fast enough to meet application requirements is the basis for realizing active safe driving and automatic driving systems. First, based on an analysis of practical application requirements, the GTSDB database is selected as the source data, and a traffic sign dataset that can comprehensively evaluate the performance of semantic segmentation algorithms is constructed. Then, building on U-Net, a classical semantic segmentation network with stable performance, a deep neural network structure named D-Unet (D denotes dilated convolution) is proposed, which achieves better segmentation performance and higher real-time performance on small targets such as traffic signs. The method uses fewer pooling layers to retain more image information, and replaces conventional convolution with dilated convolution to enlarge the receptive field and better aggregate global information. Finally, on the dataset designed in this paper, compared with the FCN-8s, SegNet, and U-Net image segmentation network models, the mean intersection over union (MIoU) of the proposed model is about 11.9, 6.09, and 3.71 percentage points higher, respectively, while its parameter count is only 4.94%, 22.5%, and 85.5% of those of the other three network models.
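To illustrate the idea described in the abstract, replacing conventional convolution with dilated convolution so the receptive field grows without additional pooling, the sketch below shows a minimal dilated-convolution encoder block. It assumes a PyTorch implementation; the framework, the class name DilatedConvBlock, the channel sizes, and the dilation rate are illustrative assumptions and are not specified by the paper.

```python
# Minimal sketch of a dilated-convolution block, assuming PyTorch.
# Names, channel sizes, and dilation rate are illustrative, not the
# authors' exact D-Unet configuration.
import torch
import torch.nn as nn


class DilatedConvBlock(nn.Module):
    """Two 3x3 dilated convolutions in place of a conventional U-Net block.
    Setting padding = dilation keeps the spatial size unchanged, so the
    receptive field grows without relying on extra pooling layers."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    # A 3x3 kernel with dilation=2 covers a 5x5 area, so two stacked
    # layers see a 9x9 receptive field with no additional pooling.
    x = torch.randn(1, 3, 256, 256)   # dummy RGB road-scene crop
    feats = DilatedConvBlock(3, 64)(x)
    print(feats.shape)                # torch.Size([1, 64, 256, 256])
```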

Keywords