Remote Sensing (Nov 2024)

Self-Activated Implicit Neural Representation for Synthetic Aperture Radar Images

  • Dongshen Han,
  • Chaoning Zhang

DOI: https://doi.org/10.3390/rs16234473
Journal volume & issue: Vol. 16, no. 23, p. 4473

Abstract


Image Implicit Neural Representations (INRs) adopt a neural network to learn a continuous function that maps pixel coordinates to their corresponding values, and they have gained significant attention for representing images in a continuous manner. Despite substantial progress on natural images, INRs for Synthetic Aperture Radar (SAR) images have received little investigation. This work makes a pioneering effort to study INRs for SAR images and finds that fine details are hard to represent. Prior works have shown that fine details are easier to learn when the model weights are better initialized, which motivated us to investigate the benefits of activating the model weights before training on the target image. The challenge of this task lies in the fact that SAR images cannot be used during the model activation stage. To this end, we propose exploiting a cross-pixel relationship of the model output, which requires no target images. Specifically, we design a novel self-activation method that alternates between two loss functions: one used to smooth out the model output and another used for the opposite purpose. Extensive results on SAR images empirically show that our proposed method improves model performance by a non-trivial margin.
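The sketch below illustrates the general idea in the abstract, assuming a standard coordinate-MLP INR implemented in PyTorch. The network structure, the particular smoothness penalty, the alternation schedule, and names such as `smoothness_loss` and `self_activate` are illustrative assumptions and not the authors' implementation; the key point is that the activation stage uses only the model's own output on a coordinate grid, with no SAR image involved.

```python
# Hypothetical sketch: coordinate-MLP INR with a self-activation stage that
# alternates a smoothing loss and its opposite, using no target image.
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """Maps 2D pixel coordinates in [-1, 1]^2 to a scalar pixel value."""

    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers + [1]
        net = []
        for i in range(len(dims) - 1):
            net.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                net.append(nn.ReLU())
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        return self.net(coords)


def image_grid(h, w):
    """Build an (H*W, 2) grid of normalized pixel coordinates."""
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([yy, xx], dim=-1).reshape(-1, 2)


def smoothness_loss(pred, h, w):
    """Cross-pixel relationship: penalize differences between adjacent outputs."""
    img = pred.reshape(h, w)
    dx = (img[:, 1:] - img[:, :-1]).abs().mean()
    dy = (img[1:, :] - img[:-1, :]).abs().mean()
    return dx + dy


def self_activate(model, h=64, w=64, steps=200, lr=1e-4):
    """Activate the model weights before target training by alternating between
    smoothing the output and the opposite objective; no target image is used."""
    coords = image_grid(h, w)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        pred = model(coords)
        loss = smoothness_loss(pred, h, w)
        if step % 2 == 1:          # odd steps: push in the opposite direction
            loss = -loss
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    model = CoordinateMLP()
    self_activate(model)           # weight activation stage, SAR-image-free
    # Afterwards, the activated model would be fit to a target SAR image,
    # e.g., with an MSE loss between model(coords) and the pixel values.
```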

Keywords