Applied Sciences (Mar 2024)

Do Humans and Convolutional Neural Networks Attend to Similar Areas during Scene Classification: Effects of Task and Image Type

  • Romy Müller,
  • Marcel Dürschmidt,
  • Julian Ullrich,
  • Carsten Knoll,
  • Sascha Weber,
  • Steffen Seitz

DOI: https://doi.org/10.3390/app14062648
Journal volume & issue: Vol. 14, No. 6, p. 2648

Abstract

Deep neural networks are powerful image classifiers, but do they attend to the same image areas as humans? While previous studies have investigated how this similarity is shaped by technological factors, little is known about the role of factors that affect human attention. Therefore, we investigated the interactive effects of task and image characteristics. We varied the intentionality of the tasks used to elicit human attention maps (i.e., spontaneous gaze, gaze-pointing, manual area selection). Moreover, we varied the type of image to be categorized (i.e., singular objects, indoor scenes consisting of object arrangements, and landscapes without distinct objects). The human attention maps generated in this way were compared to the attention maps of a convolutional neural network (CNN), as revealed by a method of explainable artificial intelligence (Grad-CAM). The influence of the human task strongly depended on image type: for objects, manual selection produced attention maps most similar to those of the CNN, while the specific eye-movement task had little impact. For indoor scenes, spontaneous gaze produced the least similarity, and for landscapes, similarity was equally low across all human tasks. Our results highlight the importance of taking human factors into account when comparing the attention of humans and CNNs.
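The abstract does not spell out the comparison pipeline, so the following is only a minimal sketch of the general approach it describes: computing a Grad-CAM attention map for a CNN and comparing it to a human attention map of the same size. The choice of a pretrained VGG16 from torchvision and of Pearson correlation as the similarity measure are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative model choice (assumption): a pretrained VGG16; the paper's
# CNN and training setup may differ.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
target_layer = model.features[-1]  # last convolutional stage

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Cache the feature maps of the target layer on the forward pass.
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a Grad-CAM heatmap (H, W) in [0, 1] for one image tensor (3, H, W)."""
    logits = model(image.unsqueeze(0))
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Grad-CAM: global-average-pool the gradients to per-channel weights,
    # take a ReLU-rectified weighted sum of the activation maps,
    # then upsample to the input resolution and normalize to [0, 1].
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)[0, 0]
    cam -= cam.min()
    return (cam / cam.max().clamp(min=1e-8)).numpy()

def map_similarity(human_map, cnn_map):
    """Pearson correlation between two same-sized attention maps (assumed metric)."""
    return np.corrcoef(human_map.ravel(), cnn_map.ravel())[0, 1]
```

In such a pipeline, the human attention maps (from gaze recordings or manual selections) would be rasterized to the same resolution as the Grad-CAM output before computing the similarity score.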

Keywords