IEEE Access (Jan 2021)

Automatic Recognition of Traffic Signs Based on Visual Inspection

  • Shouhui He,
  • Lei Chen,
  • Shaoyun Zhang,
  • Zhuangxian Guo,
  • Pengjie Sun,
  • Hong Liu,
  • Hongda Liu

DOI
https://doi.org/10.1109/ACCESS.2021.3059052
Journal volume & issue
Vol. 9
pp. 43253–43261

Abstract

The automatic recognition of traffic signs is essential to autonomous driving, assisted driving, and driving safety. Currently, the convolutional neural network (CNN) is the most popular deep learning algorithm for traffic sign recognition. However, a CNN cannot capture the pose, perspective, and orientation of objects in an image, and therefore cannot accurately recognize traffic signs viewed from different perspectives. To solve this problem, the authors present an automatic recognition algorithm for traffic signs based on visual inspection. To ensure the accuracy of visual inspection, a region of interest (ROI) extraction method was designed through content analysis and key information recognition. In addition, a Histogram of Oriented Gradients (HOG) method was developed for image detection to prevent projection distortion. Furthermore, a traffic sign recognition learning architecture was created based on CapsNet, which uses groups of neurons (capsules) to represent target parameters such as pose and orientation and relies on dynamic routing, so that traffic sign information from different angles or directions can be captured effectively. Finally, the proposed model was compared with several baseline methods through experiments on the LISA (Laboratory for Intelligent and Safe Automobiles) traffic sign dataset. Model performance was measured by mean average precision (MAP), time, memory, floating point operations per second (FLOPS), and parameter count. The results show that the proposed model required less time yet achieved better recognition performance than the baseline methods, including CNN, support vector machine (SVM), and region-based fully convolutional network (R-FCN) with ResNet-101.
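
The abstract does not give implementation details for the HOG stage, so the following is only a minimal sketch of computing a HOG descriptor for a cropped traffic sign ROI with scikit-image; the input window size and the HOG parameters (orientation bins, cell and block sizes) are illustrative assumptions, not the paper's settings.

```python
# Illustrative HOG feature extraction for a traffic sign ROI (assumed parameters).
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize

def extract_hog_features(roi: np.ndarray) -> np.ndarray:
    """Compute a HOG descriptor for a cropped traffic sign region."""
    if roi.ndim == 3:                                  # colour ROI -> grayscale
        roi = rgb2gray(roi)
    roi = resize(roi, (64, 64), anti_aliasing=True)    # fixed window size (assumed)
    return hog(
        roi,
        orientations=9,            # gradient orientation bins
        pixels_per_cell=(8, 8),    # cell size in pixels
        cells_per_block=(2, 2),    # block size used for local normalization
        block_norm="L2-Hys",
        feature_vector=True,
    )
```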
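
The CapsNet component is likewise only named in the abstract; the sketch below shows the generic routing-by-agreement procedure from the original CapsNet work (Sabour et al., 2017) in plain NumPy, with tensor shapes and the number of routing iterations chosen as assumptions for illustration. In a full architecture, `u_hat` would be produced by learned transformation matrices applied to lower-level capsule outputs.

```python
# Minimal NumPy sketch of CapsNet routing-by-agreement (illustrative, not the paper's code).
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Non-linear squashing: keeps a vector's direction, maps its length into [0, 1)."""
    norm_sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, num_iterations=3):
    """
    u_hat: prediction vectors, shape (num_in_capsules, num_out_capsules, out_dim),
           i.e. each lower-level capsule's vote for each higher-level capsule's pose.
    Returns the higher-level capsule outputs, shape (num_out_capsules, out_dim).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                              # routing logits
    for _ in range(num_iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)     # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                   # weighted sum of votes
        v = squash(s)                                            # output capsule poses
        b += (u_hat * v[None, ...]).sum(axis=-1)                 # agreement update
    return v

# Example: 32 lower-level capsules voting for 5 sign classes with 16-D pose vectors.
votes = np.random.randn(32, 5, 16)
outputs = dynamic_routing(votes)    # shape (5, 16)
```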

Keywords