Remote Sensing (Nov 2024)
Residual-Based Implicit Neural Representation for Synthetic Aperture Radar Images
Abstract
Implicit neural representations (INRs) are an emerging way to represent signals ranging from 1D audio to 3D shapes, among which 2D images are the most widely explored due to their ubiquity. An image INR uses a neural network to learn a continuous function that maps pixel coordinates to the corresponding pixel values. Continuous representation of synthetic aperture radar (SAR) images with INRs has not yet been explored. Existing INR frameworks developed on natural images perform reasonably well, but they struggle to capture fine details. This can be attributed to the INR's prioritization of inter-pixel relationships, which harms the intra-pixel mapping in regions that require fine detail. To address this, we decompose the target image into an artificial uniform noise component (intra-pixel mapping) and a residual image (inter-pixel relationships). Rather than learning an INR for the target image directly, we propose a noise-first residual learning (NRL) method. NRL first learns the uniform noise component and then gradually incorporates the residual into the optimization target through a sine-adjusted incrementation scheme as training progresses. Because some SAR images inherently contain significant noise, which can itself facilitate learning the intra-pixel independent mapping, we further propose a gradient-based dataset separation method that distinguishes clean from noisy images, allowing the model to learn directly from the noisy ones. Extensive experiments show that our method achieves competitive performance, indicating that learning the intra-pixel independent mapping first and the inter-pixel relationships afterwards improves INR learning for SAR images.
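The noise-plus-residual decomposition and the sine-adjusted incrementation can be sketched as follows. This is a minimal illustration only: the function names (decompose, nrl_target), the uniform-noise range, and the half-period sine ramp from 0 to 1 are assumptions made for exposition, not the authors' implementation.

```python
import numpy as np

def decompose(image, rng, low=0.0, high=1.0):
    """Split the target image into an artificial uniform noise component
    (intra-pixel mapping) and a residual image (inter-pixel relationships),
    so that image = noise + residual."""
    noise = rng.uniform(low, high, size=image.shape)
    residual = image - noise
    return noise, residual

def nrl_target(noise, residual, step, total_steps):
    """Build the optimization target at a given training step.
    The residual weight alpha follows a sine-shaped ramp from 0 to 1,
    so the INR fits the noise component first and gradually shifts
    toward the full image (noise + residual)."""
    alpha = np.sin(0.5 * np.pi * min(step / total_steps, 1.0))
    return noise + alpha * residual

# Usage sketch: early in training the target is pure noise,
# by the end it equals the original image.
rng = np.random.default_rng(0)
image = rng.random((256, 256))              # stand-in for a normalized SAR image
noise, residual = decompose(image, rng)
target_early = nrl_target(noise, residual, step=0, total_steps=1000)     # == noise
target_late = nrl_target(noise, residual, step=1000, total_steps=1000)   # == image
```

Under these assumptions, the INR is supervised on nrl_target at each step, so the intra-pixel independent mapping is learned before the inter-pixel structure is introduced.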
Keywords