International Journal of Digital Earth (Dec 2024)

How well do the volunteers label land cover types in manual interpretation of remote sensing imagery?

  • Yan Wang,
  • Chenxi Li,
  • Xueyi Liu,
  • Hongdong Li,
  • Zhiying Yao,
  • Yuanyuan Zhao

DOI
https://doi.org/10.1080/17538947.2024.2347443
Journal volume & issue
Vol. 17, no. 1

Abstract

High-quality training and validation samples are crucial for land cover classification, especially in complex scenarios. The reliability, representativeness, and generalizability of the sample set are important for further research. However, manual interpretation is subjective and prone to errors. Therefore, this study investigated the following questions: (1) How much does interpreters’ performance differ across educational levels? (2) Do the accuracies of humans and AI (Artificial Intelligence) improve with increased training and supporting material? (3) How sensitive are the accuracies of land cover types to different supporting material? (4) Does interpretation accuracy change with interpreters’ consistency? The experiment involved 50 interpreters completing five cycles of manual image interpretation. Interpreters with higher educational backgrounds performed better: accuracies were 52.22% and 58.61% before training and 61.13% and 70.21% after training. Accuracy generally increased with more supporting material. Ultra-high-resolution images and background knowledge contributed the most to accuracy improvement, while the time series of the normalized difference vegetation index (NDVI) contributed the least. Group consistency was a reliable indicator of volunteer sample reliability. With limited samples, AI did not perform as well as manual interpretation. To ensure sample quality in manual interpretation, we recommend inviting educated volunteers, providing training, preparing effective supporting material, and filtering samples based on group consistency.
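The abstract recommends filtering volunteer samples based on group consistency. As an illustration only, the minimal Python sketch below measures consistency as the fraction of volunteers agreeing with the majority label for each sample and keeps samples above a threshold; the function names, data layout, and the 0.8 cut-off are assumptions for this example and are not taken from the paper.

```python
# Hypothetical sketch of consistency-based sample filtering.
# The threshold and data structures are illustrative assumptions.
from collections import Counter

CONSISTENCY_THRESHOLD = 0.8  # assumed cut-off, not a value reported by the study


def group_consistency(labels):
    """Return the majority label and the fraction of volunteers agreeing with it."""
    counts = Counter(labels)
    majority_label, majority_count = counts.most_common(1)[0]
    return majority_label, majority_count / len(labels)


def filter_reliable_samples(labels_by_sample, threshold=CONSISTENCY_THRESHOLD):
    """Keep samples whose volunteer labels agree at or above the threshold."""
    reliable = {}
    for sample_id, labels in labels_by_sample.items():
        label, consistency = group_consistency(labels)
        if consistency >= threshold:
            reliable[sample_id] = label
    return reliable


# Example: three volunteers label two samples.
labels_by_sample = {
    "sample_001": ["cropland", "cropland", "cropland"],  # consistency 1.0 -> kept
    "sample_002": ["forest", "grassland", "cropland"],   # consistency 0.33 -> dropped
}
print(filter_reliable_samples(labels_by_sample))
```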

Keywords