IEEE Access (Jan 2023)

Efficient Attention-Convolution Feature Extractor in Semantic Segmentation for Autonomous Driving Systems

  • Seyed-Hamid Mousavi,
  • Mahdi Seyednezhad,
  • Kin-Choong Yow

DOI
https://doi.org/10.1109/ACCESS.2023.3324600
Journal volume & issue
Vol. 11
pp. 142146 – 142161

Abstract


Deep learning has been widely used in computer vision and has achieved state-of-the-art results in many applications, including self-driving cars. Despite this progress, little attention has been paid to the safety-level importance of different classes: most models treat all classes alike and report only average precision. However, different classes contribute differently to the reliability and safety of an autonomous driving system; for example, the Person class should have a higher priority than the Sky class in terms of segmentation accuracy. In this work, we introduce a new Attention-Convolution Block (ACB) feature extractor with modified self-attention, which extracts both detailed and long-range information from the input feature maps and feeds the rest of the network with more focused feature maps. Based on this feature extractor, we develop two semantic segmentation models that strike a balanced trade-off between complexity and accuracy and can accurately distinguish safety-critical classes such as the Person class. To demonstrate the performance of our models, we run experiments on the Cityscapes dataset, using both quantitative (mean and per-class IoU scores) and qualitative (visual inspection of output segmentation maps) measures, and compare the results with state-of-the-art methods. The results show that our proposed model improves the per-class IoU scores for the Person and Bike classes by at least 7 percent. In addition, we compare the accuracy of different models against their complexity to show that, despite its simple structure and small number of parameters, our proposed model achieves high IoU accuracy.
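The abstract describes the ACB as a block that combines convolution with modified self-attention so the feature maps carry both detailed (local) and long-range (global) information. The abstract does not give the block's exact design, so the following PyTorch code is only a minimal illustrative sketch of that general idea; the class name `AttentionConvBlock`, the branch layout, and the fusion by addition are assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of an attention-convolution block; the structure and
# names are assumptions, not the paper's exact ACB design.
import torch
import torch.nn as nn

class AttentionConvBlock(nn.Module):
    """Fuses a local convolution branch with a self-attention branch so the
    output carries both detailed and long-range context."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        # Local branch: standard 3x3 convolution for fine spatial detail.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over spatial positions.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv(x)
        # Flatten spatial dims into a token sequence for attention: (B, H*W, C).
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        global_ctx, _ = self.attn(tokens, tokens, tokens)
        global_ctx = global_ctx.transpose(1, 2).reshape(b, c, h, w)
        # Fuse the detailed (local) and long-range (global) feature maps.
        return local + global_ctx


if __name__ == "__main__":
    block = AttentionConvBlock(channels=64)
    feat = torch.randn(2, 64, 32, 32)   # example input feature map
    print(block(feat).shape)            # torch.Size([2, 64, 32, 32])
```

In this sketch the two branches are simply summed; the paper may use a different fusion or attention formulation, which the abstract alone does not specify.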

Keywords