PLoS ONE (Jan 2024)

Res2Net-based multi-scale and multi-attention model for traffic scene image classification.

  • Guanghui Gao,
  • Yining Guo,
  • Lumei Zhou,
  • Li Li,
  • Gang Shi

DOI
https://doi.org/10.1371/journal.pone.0300017
Journal volume & issue
Vol. 19, no. 5
p. e0300017

Abstract


With the increasing application of traffic scene image classification in intelligent transportation systems, there is growing demand for accuracy and robustness in this task. However, owing to weather, time of day, lighting variations, and annotation costs, traditional deep learning methods remain limited in extracting complex traffic scene features and achieving high recognition accuracy. Previous classification methods for traffic scene images also left gaps in multi-scale feature extraction and in combining frequency-domain, spatial, and channel attention. To address these issues, this paper proposes a multi-scale, multi-attention model based on Res2Net. The proposed framework introduces an Adaptive Feature Refinement Pyramid Module (AFRPM) to enhance multi-scale feature extraction and thereby improve the accuracy of traffic scene image classification. In addition, we integrate frequency-domain and spatial-channel attention mechanisms to improve recognition of complex backgrounds, objects at different scales, and local details in traffic scene images. We evaluate the model on the Traffic-Net dataset, where it achieves an accuracy of 96.88%, an improvement of approximately 2 percentage points over the baseline Res2Net network. Ablation experiments further validate the effectiveness of the proposed modules.
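The abstract describes combining frequency-domain attention with spatial and channel attention on convolutional feature maps. The paper's actual modules are not reproduced here; the following is only a toy NumPy sketch of the general idea (all function names and the specific gating/low-pass choices are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Squeeze spatial dims by global average pooling, then gate each channel.
    c = x.mean(axis=(1, 2))                      # (C,)
    w = 1.0 / (1.0 + np.exp(-c))                 # sigmoid gate per channel
    return x * w[:, None, None]

def spatial_attention(x):
    # Pool across channels, then gate each spatial location.
    s = x.mean(axis=0)                           # (H, W)
    w = 1.0 / (1.0 + np.exp(-s))                 # sigmoid gate per location
    return x * w[None, :, :]

def frequency_attention(x):
    # Toy frequency-domain reweighting: emphasize low-frequency content
    # (a fixed low-pass filter stands in for a learned frequency weighting).
    F = np.fft.fft2(x, axes=(1, 2))
    fy = np.fft.fftfreq(x.shape[1])[:, None]
    fx = np.fft.fftfreq(x.shape[2])[None, :]
    lowpass = 1.0 / (1.0 + fy**2 + fx**2)        # (H, W), broadcast over C
    return np.real(np.fft.ifft2(F * lowpass, axes=(1, 2)))

def multi_attention(x):
    # Chain the three attention branches; each preserves the (C, H, W) shape.
    return spatial_attention(channel_attention(frequency_attention(x)))

x = np.random.default_rng(0).standard_normal((4, 8, 8))
y = multi_attention(x)
print(y.shape)  # (4, 8, 8)
```

In a real model these gates would be learned (e.g. small MLPs or convolutions producing the weights), but the sketch shows how the three mechanisms reweight the same feature map along different axes.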