IEEE Access (Jan 2020)

Investigating and Suggesting the Evaluation Dataset for Image Classification Model

  • Saraswathi Sivamani,
  • Sun Il Chon,
  • Do Yeon Choi,
  • Ji Hwan Park

DOI
https://doi.org/10.1109/ACCESS.2020.3024575
Journal volume & issue
Vol. 8
pp. 173599 – 173608

Abstract


Image processing systems have become widespread with the digital transformation driven by artificial intelligence. Many researchers have developed and tested image classification models using machine learning and statistical techniques. Nevertheless, current research seldom focuses on the quality assurance of these models. Existing methods fail to verify quality assurance, lacking the test cases needed to prepare an evaluation dataset for testing a model, which can cause critical drawbacks in fields such as nuclear power and defense systems. In this article, we discuss and suggest the preparation of the evaluation dataset using improved test cases through Cause-Effect Graphing. The proposed method generates the evaluation dataset with automated test cases through a quantification method that consists of 1) selecting image characteristics, 2) creating a Cause-Effect graph of the image with its features, and 3) generating all possible test coverage. Testing is performed with the COCO dataset and shows declining prediction accuracy as brightness and sharpness are adjusted between -75% and 75%, indicating that existing test datasets neglect these important characteristics. The experiment shows that prediction fails when sharpness falls below 0%, and that brightness fails at -75%, with fewer objects detected between -50% and 75%. This indicates that characteristic changes affect both the prediction accuracy and the number of objects detected in an image. Our approach demonstrates the importance of the characteristic selection process for the overall image in building a more efficient model and increasing the accuracy of object detection.
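As a rough illustration of step 3 ("generate all possible test coverage"), the sketch below enumerates every combination of the image characteristics chosen in steps 1 and 2. The characteristic names (`brightness`, `sharpness`) and the level grid (-75% to +75% in 25% steps) are assumptions inferred from the experiment described in the abstract, not the paper's actual implementation.

```python
# Hypothetical sketch: exhaustive test-case generation over selected
# image characteristics (full combinatorial coverage).
from itertools import product

def generate_test_cases(characteristics):
    """Yield one test case (a dict of characteristic -> level) per
    combination of the given characteristic levels."""
    names = sorted(characteristics)
    for combo in product(*(characteristics[n] for n in names)):
        yield dict(zip(names, combo))

# Assumed level grid, based on the reported -75% .. +75% sweep.
levels = list(range(-75, 76, 25))  # [-75, -50, -25, 0, 25, 50, 75]
cases = list(generate_test_cases({"brightness": levels,
                                  "sharpness": levels}))
print(len(cases))  # 7 brightness levels x 7 sharpness levels = 49 cases
```

Each generated case would then be applied to a base image (e.g. via an image-enhancement library) to produce one entry of the evaluation dataset.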

Keywords