PeerJ Computer Science (Oct 2024)

Field-road classification for agricultural vehicles in China based on pre-trained visual model

  • Xiaoqiang Zhang,
  • Ying Chen

DOI
https://doi.org/10.7717/peerj-cs.2359
Journal volume & issue
Vol. 10
p. e2359

Abstract

Field-road classification, which automatically identifies the activity (either in-field or on-road) of each point in a Global Navigation Satellite System (GNSS) trajectory, is a critical step in the behavior analysis of agricultural vehicles. To capture movement patterns specific to agricultural operations, we propose a multi-view field-road classification method that extracts a physical and a visual feature vector to represent each trajectory point. We propose a task-specific approach that uses a pre-trained visual model to extract visual features effectively. First, an image is generated from a point plus its neighboring points to provide the contextual information of that point. Then, an image recognition model (a fine-tuned ResNet) is developed using the pre-training and fine-tuning paradigm: a pre-training process trains an image recognition model (ResNet) on natural image datasets (e.g., ImageNet), and a fine-tuning process updates the parameters of the pre-trained model using the trajectory point images, so that the model holds both general knowledge and task-specific knowledge. Finally, a visual feature is extracted for each point by the fine-tuned model, thereby overcoming the limitations caused by the small scale of the generated image dataset. To validate the effectiveness of our multi-view field-road classification, we conducted experiments on four trajectory datasets (Wheat 2021, Paddy, Wheat 2023, and Wheat 2024). The results demonstrate that the proposed method achieves competitive accuracy, i.e., 92.56%, 87.91%, 90.31%, and 94.23% on the four datasets, respectively. Extensive experiments demonstrate that our approach consistently outperforms the existing state-of-the-art method on the four trajectory datasets by 2.99%, 4.42%, 2.88%, and 2.77% in F1-score, respectively. In addition, we conducted an in-depth analysis to verify the necessity and effectiveness of our method.
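
As an illustration of the pre-training and fine-tuning paradigm the abstract describes, the following PyTorch sketch fine-tunes an ImageNet-pretrained ResNet on binary (in-field vs. on-road) trajectory-point images and then reads its penultimate layer as the visual feature vector. This is a minimal sketch under assumed details, not the authors' implementation: the ResNet-18 variant, the `train_loader`, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of the pre-training
# and fine-tuning paradigm: reuse ImageNet weights, fine-tune a binary
# head on trajectory-point images, then extract penultimate-layer features.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-training: load a ResNet whose weights were trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Fine-tuning: replace the 1000-way ImageNet head with a 2-way head
# (in-field vs. on-road) and update all parameters on point images.
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr
criterion = nn.CrossEntropyLoss()

def fine_tune(model, train_loader, num_epochs=5):
    """train_loader (assumed) yields (point-image batch, 0/1 label batch)."""
    model.train()
    for _ in range(num_epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

@torch.no_grad()
def extract_visual_features(model, images):
    """Return 512-d penultimate-layer activations as visual features."""
    model.eval()
    backbone = nn.Sequential(*list(model.children())[:-1])  # drop fc head
    feats = backbone(images.to(device))  # shape (N, 512, 1, 1)
    return feats.flatten(1)              # shape (N, 512)
```

In this setup the visual vector returned by `extract_visual_features` would be concatenated with the physical feature vector of the same point before the final field/road decision; the concatenation step is implied by the multi-view design rather than spelled out in the abstract.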

Keywords