IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)
How Does Super-Resolution for Satellite Imagery Affect Different Types of Land Cover? Sentinel-2 Case
Abstract
In the dynamic field of satellite imagery, super-resolution (SR) techniques grounded in deep learning have become paramount. This research centers on understanding and remediating the distinct challenges that different land cover types pose for image resolution enhancement. Two neural networks of contrasting designs, the convolutional SRCNN and the transformer-based SwinIR, are applied to a range of land cover types to compare their impacts in a detailed and comprehensive manner. The study goes beyond merely enhancing the Sentinel-2 dataset's resolution from 20 m/pix to 10 m/pix: it examines the trends inherent to different land cover types and how each interacts with SR processing. Applying the networks to 255 × 254 pixel patches covering six dominant types (forests, large fields, small fields, urban, sub-urban, and mixed) highlights substantial variations in metrics, underlining the individual interaction of each land cover type with SR techniques. A comprehensive accuracy assessment is conducted using an array of metrics and frequency-domain analysis to shed light on these differences and provide insights for optimizing the SR approach for each land cover type. Notably, the PSNR metric reveals significant disparities, particularly between the "forest" and "urban" categories for both SRCNN and SwinIR. The "forest" class yielded the best results, 66.06 dB for SRCNN and 67.00 dB for SwinIR, while the "urban" class marked the lowest with 55.09 dB and 57.02 dB, respectively, reinforcing the practical importance of this study.
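For reference, the PSNR figures above compare a reference image against an SR estimate via the mean squared error. A minimal sketch of the computation is given below; the synthetic 8-bit patch and the `max_value` parameter are illustrative assumptions, not the paper's Sentinel-2 data (Sentinel-2 reflectance products have a larger dynamic range, which is one reason the reported values exceed the familiar 8-bit regime):

```python
import numpy as np

def psnr(reference, estimate, max_value):
    """Peak signal-to-noise ratio in dB between a reference image and an estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy example on a synthetic 8-bit patch (hypothetical data, not Sentinel-2):
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-2, 3, size=ref.shape),
                0, 255).astype(np.uint8)
print(round(psnr(ref, noisy, 255), 2))
```

Because PSNR is logarithmic in the squared dynamic range, a per-class comparison such as "forest" vs. "urban" is only meaningful when the same `max_value` convention is used throughout.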
Keywords