Remote Sensing (Aug 2024)
Optical-to-SAR Translation Based on CDA-GAN for High-Quality Training Sample Generation for Ship Detection in SAR Amplitude Images
Abstract
Abundant datasets are critical for training deep-learning-based models for ship detection. Compared with optical imagery, ship detection based on synthetic aperture radar (SAR), especially the recently launched high-Earth-orbit spaceborne SAR, lacks sufficient training samples. A novel cross-domain attention GAN (CDA-GAN) model is proposed for optical-to-SAR translation, which generates high-quality SAR amplitude training samples of a target by converting optical images. High quality here means high geometric similarity of the target to the corresponding optical image and low background noise around the target. In the proposed model, a cross-domain attention mechanism and cross-domain multi-scale feature fusion are designed to improve the quality of samples for detection based on the generative adversarial network (GAN). Specifically, the cross-domain attention mechanism simultaneously emphasizes discriminative features from both optical and SAR images. Moreover, the cross-domain multi-scale feature fusion module further emphasizes the geometric and semantic information of the target in the feature map from the perspective of global features. Finally, a reference loss is introduced in CDA-GAN to fully retain the extra features produced by the cross-domain attention mechanism and the cross-domain multi-scale feature fusion module. Experimental results demonstrate that training samples generated by the proposed CDA-GAN yield higher ship detection accuracy on real SAR data than other state-of-the-art methods. The proposed method is generally applicable to SARs in different orbits and can be extended to the high-Earth-orbit spaceborne SAR case.
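To illustrate the kind of operation a cross-domain attention mechanism performs, the following is a minimal NumPy sketch: channel attention weights are computed jointly from optical and SAR feature maps and used to reweight the SAR features. All names and the pooling/softmax design are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_domain_attention(feat_opt, feat_sar):
    """Hypothetical sketch of cross-domain channel attention.

    feat_opt, feat_sar: (C, H, W) feature maps from the optical and
    SAR branches. Returns SAR features reweighted by attention scores
    derived from BOTH domains, so channels that are discriminative in
    either domain are emphasized.
    """
    # Global average pooling over spatial dimensions for each domain
    g_opt = feat_opt.mean(axis=(1, 2))           # (C,)
    g_sar = feat_sar.mean(axis=(1, 2))           # (C,)
    # Joint channel descriptor combining the two domains
    joint = g_opt + g_sar                        # (C,)
    weights = softmax(joint)                     # per-channel attention
    # Broadcast weights over the spatial dimensions
    return feat_sar * weights[:, None, None]

# Toy usage with random feature maps
rng = np.random.default_rng(0)
f_opt = rng.standard_normal((8, 4, 4))
f_sar = rng.standard_normal((8, 4, 4))
out = cross_domain_attention(f_opt, f_sar)
print(out.shape)
```

In the paper's full model this joint weighting would sit inside the GAN generator; the sketch only shows the core idea of conditioning the attention on features from both domains at once.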
Keywords