IEEE Access (Jan 2024)
Traffic Sign Detection Under Adverse Environmental Conditions Based on CNN
Abstract
A robust and reliable Traffic Sign Detection and Recognition (TSDR) system is essential for the effective deployment of autonomous driving technology. Although numerous scholars have conducted extensive research in this field, most current studies consider TSDR only under ideal conditions, neglecting scenarios such as rain, snow, and fog, which can cause image blurring. This paper investigates the degradation of TSDR performance caused by five adverse environmental conditions: rain, snow, fog, lens dirt, and lens blur. To overcome the adverse effects of these conditions on TSDR, this paper proposes a Convolutional Neural Network (CNN)-based TSDR method consisting of three modules: an adverse environment classification module, an image enhancement module, and a traffic sign detection module. The adverse environment classifier, based on the VGG19 architecture, identifies whether an image exhibits any of the five adverse conditions. The image enhancement module, named Enhance-Net, handles each of the five adverse conditions separately and specifically enhances the traffic sign regions within the image rather than the entire image area. To increase inference speed, the traffic sign detection module adopts the YOLOv4 framework. The proposed method's effectiveness is assessed on the CURE-TSD dataset, which includes traffic videos recorded under various adverse environmental conditions. Experimental results demonstrate that, across five different severity levels of adverse environments, the proposed method achieves 95.03% accuracy and runs at 12.79 fps (frames per second). Compared with the current benchmark, although training the proposed method on a subset of the dataset results in a 2.81% reduction in accuracy, the speed increases by 12.03 fps, demonstrating the efficacy of the proposed approach.
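The three-module pipeline described in the abstract can be illustrated with the following sketch. This is a minimal, hypothetical outline, not the authors' implementation: `classify_environment`, `enhance_image`, and `detect_signs` are stand-in stubs for the trained VGG19 classifier, Enhance-Net, and the YOLOv4 detector, and all names, signatures, and placeholder outputs are illustrative assumptions.

```python
# Hypothetical sketch of the proposed three-module TSDR pipeline.
# Each function below is a stub standing in for a trained model.

CONDITIONS = ["clear", "rain", "snow", "fog", "lens_dirt", "lens_blur"]

def classify_environment(image):
    """Module 1 stand-in: VGG19-based classifier that labels the frame
    with one of the five adverse conditions (or 'clear')."""
    # A real implementation would run a forward pass of the trained VGG19.
    return "rain"  # placeholder prediction

def enhance_image(image, condition):
    """Module 2 stand-in: Enhance-Net, with a separate enhancement branch
    per adverse condition. Per the paper, only the traffic-sign regions
    are enhanced, not the whole frame; here we simply tag the image."""
    if condition == "clear":
        return image
    return {"frame": image, "enhanced_for": condition}

def detect_signs(image):
    """Module 3 stand-in: YOLOv4 detector returning candidate
    traffic-sign bounding boxes as (x, y, w, h) tuples."""
    return [(10, 20, 50, 60)]  # placeholder box

def tsdr_pipeline(image):
    """Chain the three modules: classify -> enhance -> detect."""
    condition = classify_environment(image)
    enhanced = enhance_image(image, condition)
    boxes = detect_signs(enhanced)
    return condition, boxes
```

A frame from a traffic video would flow through `tsdr_pipeline`, with the classifier's output steering which Enhance-Net branch is applied before detection.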
Keywords