Remote Sensing (Feb 2023)

Water Body Extraction from Sentinel-2 Imagery with Deep Convolutional Networks and Pixelwise Category Transplantation

  • Joshua Billson
  • MD Samiul Islam
  • Xinyao Sun
  • Irene Cheng

DOI: https://doi.org/10.3390/rs15051253
Journal volume & issue: Vol. 15, no. 5, p. 1253

Abstract

A common task in land-cover classification is water body extraction, wherein each pixel in an image is labelled as either water or background. Water body detection is integral to the field of urban hydrology, with applications ranging from early flood warning to water resource management. Although traditional index-based methods such as the Normalized Difference Water Index (NDWI) and the Modified Normalized Difference Water Index (MNDWI) have been used to detect water bodies for decades, deep convolutional neural networks (DCNNs) have recently demonstrated promising results. However, training these networks requires access to large quantities of high-quality and accurately labelled data, which is often lacking in the field of remotely sensed imagery. Another challenge stems from the fact that the category of interest typically occupies only a small portion of an image and is thus grossly underrepresented in the data. We propose a novel approach to data augmentation—pixelwise category transplantation (PCT)—as a potential solution to both of these problems. Experimental results demonstrate PCT’s ability to improve performance on a variety of models and datasets, achieving an average improvement of 0.749 mean intersection over union (mIoU). Moreover, PCT enables us to outperform the previous high score achieved on the same dataset without introducing a new model architecture. We also explore the suitability of several state-of-the-art segmentation models and loss functions on the task of water body extraction. Finally, we address the shortcomings of previous works by assessing each model on RGB, NIR, and multispectral features to ascertain the relative advantages of each approach. In particular, we find a significant benefit to the inclusion of multispectral bands, with such methods outperforming visible-spectrum models by an average of 4.193 mIoU.
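For reference, the two spectral indices named in the abstract are simple normalized band ratios, and the reported scores use intersection over union. The sketch below shows how these quantities might be computed from Sentinel-2 bands, assuming the conventional band assignments (B3 green, B8 NIR, B11 SWIR) and a zero decision threshold; the function names, threshold, and epsilon are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + eps)

def mndwi(green: np.ndarray, swir: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Modified NDWI (Xu): (Green - SWIR) / (Green + SWIR)."""
    return (green - swir) / (green + swir + eps)

def water_mask(index: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Label pixels above the threshold as water (1) and the rest as background (0)."""
    return (index > threshold).astype(np.uint8)

def binary_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Intersection over union for one binary class; mIoU averages this over classes."""
    intersection = np.logical_and(pred == 1, target == 1).sum()
    union = np.logical_or(pred == 1, target == 1).sum()
    return float((intersection + eps) / (union + eps))
```

A typical usage would threshold the MNDWI map to obtain a baseline water mask and then score it (or a DCNN prediction) against the ground-truth labels with binary_iou, averaging the water and background IoUs to obtain the mIoU figures quoted above.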

Keywords