Applied Sciences (Jun 2020)

An Asphalt Damage Dataset and Detection System Based on RetinaNet for Road Conditions Assessment

  • Gilberto Ochoa-Ruiz
  • Andrés Alonso Angulo-Murillo
  • Alberto Ochoa-Zezzatti
  • Lina María Aguilar-Lobo
  • Juan Antonio Vega-Fernández
  • Shailendra Natraj

DOI: https://doi.org/10.3390/app10113974
Journal volume & issue: Vol. 10, No. 11, p. 3974

Abstract

The analysis and follow-up of asphalt infrastructure using image processing techniques has received increased attention recently. However, the vast majority of developments have focused only on determining the presence or absence of road damage, overlooking other, more pressing concerns. Nonetheless, to be useful to road managers and governmental agencies, the information gathered during an inspection procedure must provide actionable insights that go beyond isolated point measurements: the characteristics, type, and extent of the road damage must be effectively and automatically extracted and digitally stored, preferably using inexpensive mobile equipment. In recent years, computer vision acquisition systems have emerged as a promising solution for automated road damage inspection when integrated into georeferenced mobile computing devices such as smartphones. However, the artificial intelligence algorithms that power these computer vision acquisition systems have been rather limited, owing to the scarcity of large and homogenized road damage datasets. In this work, we aim to contribute to bridging this gap using two strategies. First, we introduce a new and very large asphalt dataset, which incorporates a set of damages not present in previous studies, making it more robust and more representative of certain damage types, such as potholes. This dataset comprises 18,345 road damage images captured by a mobile phone mounted on a car, with 45,435 instances of road surface damage (linear, lateral, and alligator cracks; potholes; and various types of painting blurs). To generate this dataset, we obtained images from several public datasets and augmented them with crowdsourced images, which were manually annotated for further processing. The images were captured under a variety of weather and illumination conditions, and a quality-aware data augmentation strategy was employed to filter out samples of poor quality, which improved the performance metrics over the baseline. Second, we trained different object detection models amenable to mobile implementation, with performance acceptable for many applications. We performed an ablation study to assess the effectiveness of the quality-aware data augmentation strategy and compared our results with other recent works, achieving better accuracies (mAP) for all classes and lower inference times (3× faster).
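As a rough illustration of the kind of RetinaNet-based detection pipeline the abstract describes (not the authors' actual implementation), the sketch below builds a RetinaNet detector from torchvision, restores fine-tuned weights, and filters predictions by confidence. The class count, checkpoint path, image path, and score threshold are hypothetical placeholders, and a recent torchvision (>= 0.13) is assumed for the `weights=` argument.

```python
# Illustrative sketch only -- not the paper's code. Paths and class count are hypothetical.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms import functional as F
from PIL import Image

NUM_CLASSES = 6  # hypothetical count; the paper's exact damage taxonomy is not reproduced here

# Build a RetinaNet with a ResNet-50 FPN backbone and a custom class count,
# then load fine-tuned road damage weights (hypothetical checkpoint file).
model = retinanet_resnet50_fpn(weights=None, num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("road_damage_retinanet.pth", map_location="cpu"))
model.eval()

# Run inference on a single road image (hypothetical file).
image = Image.open("sample_road.jpg").convert("RGB")
inputs = [F.to_tensor(image)]

with torch.no_grad():
    detections = model(inputs)[0]  # dict with "boxes", "scores", "labels"

# Keep detections above an arbitrary confidence threshold.
keep = detections["scores"] > 0.5
for box, label, score in zip(detections["boxes"][keep],
                             detections["labels"][keep],
                             detections["scores"][keep]):
    print(f"class {label.item()}: score {score.item():.2f}, box {box.tolist()}")
```

The output follows torchvision's standard detection format: one dict per input image containing `boxes`, `labels`, and `scores` tensors, which a road-assessment application could then georeference and store.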

Keywords