The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (May 2022)

FEW SHOT CROP MAPPING USING TRANSFORMERS AND TRANSFER LEARNING WITH SENTINEL-2 TIME SERIES: CASE OF KAIROUAN TUNISIA

M. K. Keraani, K. Mansour, B. Khlaifia, N. Chehata

DOI: https://doi.org/10.5194/isprs-archives-XLIII-B3-2022-899-2022
Journal volume & issue: Vol. XLIII-B3-2022, pp. 899–906

Abstract

In this paper, we present an approach to land cover mapping from Sentinel-2 (S-2) satellite image time series using deep learning methods in a few-shot context in agricultural areas, where the goal is to learn a classifier that recognizes classes unseen during training from a limited number of labelled examples. In many countries, Land Parcel Information Systems (LPIS), and hence agricultural crop type annotations, are lacking. Annotations still rely on tedious parcel digitization and in-field observations, which are available only in small numbers. Our idea is to transfer learning from models pre-trained on an existing LPIS in France and apply them to a different geographical area, Kairouan in Central Tunisia. We build on work employing multi-headed self-attention mechanisms, which have outperformed other deep learning algorithms such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) in agricultural contexts using S-2 time series. We used two transformer-based deep learning models: PSE-TAE (Pixel-Set Encoders + Temporal Self-Attention) and PSE-LTAE (Pixel-Set Encoders + Lightweight Temporal Self-Attention). We first studied their generalisation capacity in a few-shot context on a different geographical study site. Then, by transferring the knowledge of these models and adapting them to the Tunisian context with transfer learning techniques, we demonstrated experimentally that these methods can be adapted efficiently for land cover mapping in agricultural areas with few in-field observations: the overall accuracy of both models reaches almost 93% at a detailed classification level with 17 classes.
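The multi-headed temporal self-attention that the abstract builds on can be sketched in a few lines of NumPy. This is a minimal illustration only, not the authors' PSE-TAE/PSE-LTAE implementation: the function name, the random projection weights standing in for learned parameters, and the toy array shapes (24 acquisition dates, 16 features per date) are all assumptions made for the example.

```python
import numpy as np

def temporal_self_attention(x, n_heads=4, seed=0):
    """Minimal multi-headed scaled dot-product self-attention over a
    time series of parcel features with shape (T timesteps, D channels).
    Projection weights are random here, purely for illustration; in a
    trained model they would be learned parameters."""
    T, D = x.shape
    d_head = D // n_heads
    rng = np.random.default_rng(seed)
    heads = []
    for _ in range(n_heads):
        # random stand-ins for learned query/key/value projections
        Wq, Wk, Wv = (rng.standard_normal((D, d_head)) / np.sqrt(D)
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv           # each (T, d_head)
        scores = q @ k.T / np.sqrt(d_head)         # (T, T) temporal attention
        scores -= scores.max(axis=1, keepdims=True)
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)    # softmax over timesteps
        heads.append(attn @ v)                     # attend over the sequence
    return np.concatenate(heads, axis=1)           # (T, D)

# a toy Sentinel-2-like series: 24 acquisition dates, 16 features per date
series = np.random.default_rng(1).standard_normal((24, 16))
encoded = temporal_self_attention(series)
print(encoded.shape)  # (24, 16)
```

Each head compares every acquisition date against every other date, which is what lets such models weight phenologically informative periods of the crop calendar; in transfer learning, projections pre-trained on one region's LPIS are reused and fine-tuned on the few labelled parcels of the target region.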