IEEE Access (Jan 2024)
Contrastive Self-Supervised Learning for Globally Distributed Landslide Detection
Abstract
The Remote Sensing (RS) field continuously grapples with the challenge of transforming satellite data into actionable information. This ongoing issue results in an ever-growing accumulation of unlabeled data, complicating interpretation efforts. The situation becomes even more challenging when satellite data must be used immediately to identify the effects of a natural hazard. Self-supervised learning (SSL) offers a promising approach for learning image representations without labeled data. Once trained, an SSL model can address various tasks with significantly reduced requirements for labeled data. Despite advancements in SSL models, particularly those using contrastive learning methods such as MoCo, SimCLR, and SwAV, their potential remains largely unexplored in the context of instance and semantic segmentation of satellite imagery. This study integrates SwAV within an auto-encoder framework to detect landslides using decametric-resolution multi-spectral images from the globally distributed, large-scale Landslide4Sense (L4S) 2022 benchmark dataset, employing only 1% and 10% of the labeled data. Our proposed SSL auto-encoder model features two modules: SwAV, which assigns features to prototype vectors to generate encoder codes, and ResNets, which serve as the decoder for the downstream task. With just 1% of labeled data, our SSL model performs comparably to ten state-of-the-art deep learning segmentation models that utilize 100% of the labeled data in a fully supervised manner. With 10% of labeled data, our SSL model outperforms all ten fully supervised counterparts trained with 100% of the labeled data.
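To make the two-module design described above concrete, the following is a minimal PyTorch sketch of a SwAV-style prototype head: backbone features are projected, L2-normalized, scored against learnable prototype vectors, and softly assigned to prototypes with a Sinkhorn-style balancing step to produce the "codes" mentioned in the abstract. All module names, dimensions, and hyperparameters (e.g., `feat_dim`, `num_prototypes`, `eps`) are illustrative assumptions, not the authors' implementation; the ResNet decoder for the downstream segmentation task is omitted.

```python
# Illustrative sketch only -- not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwAVPrototypes(nn.Module):
    """Project encoder features and score them against K learnable prototype vectors."""
    def __init__(self, feat_dim=512, proj_dim=128, num_prototypes=256):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim)
        )
        # Prototype vectors are the rows of a bias-free linear layer.
        self.prototypes = nn.Linear(proj_dim, num_prototypes, bias=False)

    def forward(self, feats):
        z = F.normalize(self.projector(feats), dim=1)  # unit-norm embeddings
        return self.prototypes(z)                      # similarity scores to each prototype

@torch.no_grad()
def sinkhorn(scores, n_iters=3, eps=0.05):
    """Balanced soft assignment of samples to prototypes (SwAV-style codes)."""
    q = torch.exp(scores / eps).t()  # shape (K, B)
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True); q /= K  # normalize over prototypes
        q /= q.sum(dim=0, keepdim=True); q /= B  # normalize over samples
    return (q * B).t()                           # shape (B, K) assignment codes

# Example usage with random features standing in for pooled multi-spectral patch embeddings:
feats = torch.randn(8, 512)
head = SwAVPrototypes()
codes = sinkhorn(head(feats))  # soft prototype assignments used as self-supervised targets
```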
Keywords