IEEE Access (Jan 2023)

Tolerate Failures of the Visual Camera With Robust Image Classifiers

  • Muhammad Atif,
  • Andrea Ceccarelli,
  • Tommaso Zoppi,
  • Andrea Bondavalli

DOI
https://doi.org/10.1109/ACCESS.2023.3237394
Journal volume & issue
Vol. 11
pp. 5132 – 5143

Abstract


Deep Neural Networks (DNNs) have become an enabling technology for building accurate image classifiers, and are increasingly being applied in many ICT systems such as autonomous vehicles. Unfortunately, classifiers can be deceived by images that are altered due to failures of the visual camera, preventing the proper execution of the classification process. Therefore, it is of utmost importance to build image classifiers that can guarantee accurate classification even in the presence of such camera failures. This study crafts classifiers that are robust to failures of the visual camera by augmenting the training set with artificially altered images that simulate the effects of such failures. This data augmentation approach improves classification accuracy with respect to the most common data augmentation approaches, even in the absence of camera failures. To provide experimental evidence for our claims, we exercise three DNN image classifiers on three image datasets, into which we inject the effects of multiple visual-camera failures. Finally, we apply eXplainable AI to discuss why classifiers trained with the data augmentation approach proposed in this study can tolerate failures of the visual camera.
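The augmentation idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific failure models (stuck "dead" pixels, additive Gaussian sensor noise, a uniform brightness shift) and all function names here are assumptions chosen to show the general pattern of enlarging a training set with failure-altered copies of its images.

```python
import numpy as np

# Hypothetical failure models for an (H, W, 3) uint8 image. These are
# illustrative choices, not the exact failure set studied in the paper.

def dead_pixels(img, frac=0.01, rng=None):
    """Blacken a random fraction of pixels, mimicking dead sensor cells."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape[:2]) < frac
    out[mask] = 0
    return out

def gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise, mimicking sensor read noise."""
    rng = rng or np.random.default_rng(0)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def brightness_shift(img, delta=40):
    """Apply a uniform brightness change, mimicking an exposure failure."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def augment_with_failures(images, labels, rng=None):
    """Return the original set plus one failure-altered copy per image."""
    rng = rng or np.random.default_rng(0)
    failures = [dead_pixels, gaussian_noise, brightness_shift]
    aug_x, aug_y = list(images), list(labels)
    for img, y in zip(images, labels):
        f = failures[rng.integers(len(failures))]
        aug_x.append(f(img))
        aug_y.append(y)  # a camera failure does not change the true class
    return np.stack(aug_x), np.array(aug_y)
```

The augmented set would then be fed to standard DNN training; the key design point the abstract highlights is that the altered images keep their original labels, so the classifier learns to map failure-degraded inputs to the correct class.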

Keywords