PLoS ONE (Jan 2021)

Compare the performance of the models in art classification.

  • Wentao Zhao,
  • Dalin Zhou,
  • Xinguo Qiu,
  • Wei Jiang

DOI
https://doi.org/10.1371/journal.pone.0248414
Journal volume & issue
Vol. 16, no. 3
p. e0248414

Abstract

Because large numbers of artworks are preserved in museums and galleries, considerable effort is required to classify these works by genre, style and artist. Recent technological advancements have enabled an increasing number of artworks to be digitized. It is therefore necessary to teach computers to analyze (e.g., classify and annotate) art to assist people in such tasks. In this study, we tested 7 different models on 3 different datasets under the same experimental setup to compare their art classification performance with and without transfer learning. The models were compared on their ability to classify genres, styles and artists. Comparing the results with previous work shows that model performance can be effectively improved by optimizing the model structure, and our results achieve state-of-the-art performance on all classification tasks across the three datasets. In addition, we visualized the process of style and genre classification to better understand the difficulties that computers face when classifying art. Finally, we used the trained models described above to perform similarity searches and obtained performance improvements.
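
To illustrate the transfer-learning comparison described in the abstract, the sketch below fine-tunes an ImageNet-pretrained backbone for an art-classification task and builds a from-scratch counterpart for comparison. The choice of ResNet-50, the number of classes, the preprocessing and the optimizer settings are assumptions for illustration only and are not the paper's exact models or hyperparameters.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Assumption: the number of target labels (e.g. genres) in one of the datasets.
NUM_CLASSES = 10


def build_model(pretrained: bool = True) -> nn.Module:
    """Return a ResNet-50 whose classifier head is resized for art labels.

    ResNet-50 is an illustrative backbone, not necessarily one of the
    seven models compared in the paper.
    """
    weights = models.ResNet50_Weights.IMAGENET1K_V2 if pretrained else None
    model = models.resnet50(weights=weights)
    # Replace the 1000-way ImageNet head with a head sized for the art dataset.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model


# Standard ImageNet-style preprocessing; the paper's augmentation is not reproduced.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

transfer_model = build_model(pretrained=True)    # with transfer learning
scratch_model = build_model(pretrained=False)    # trained from scratch
optimizer = torch.optim.SGD(transfer_model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

Both variants would then be trained and evaluated under the same loop so that any accuracy gap can be attributed to the pretrained weights rather than to the training setup, which mirrors the controlled comparison the abstract describes.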