IEEE Access (Jan 2023)
Cross-Domain Translation Learning Method Utilizing Autoencoder Pre-Training for Super-Resolution of Radar Sparse Sensor Arrays
Abstract
This study proposes a convolutional neural network (CNN)-based image translation framework to faithfully realize super-resolution of sparse radar sensor measurements. We exploit an autoencoder model, which learns a latent representation of the input domain through iterative encoding and decoding. Based on the premise that the low-resolution data retains the essential measurements needed to represent the information of its high-resolution counterpart, we assumed that the input (low-resolution source domain, denoted $X$) and the output (high-resolution target domain, denoted $Y$) share a common latent code. Motivated by the observation that the recovery quality of $Y$-to-$Y$ translation is far better than that of $X$-to-$Y$ translation, we pre-trained the network to find the latent code $z_{y}$ of the $Y$-domain data; the acquired $z_{y}$ then served as a label for predicting the latent code $z_{x}$ of the $X$-domain data. Consequently, the encoder and decoder of the proposed network were trained with separate minimization strategies under their respective loss functions. To verify the effect of the pre-training scheme, we introduced hyper-parameters $\alpha$ and $\beta$ to regulate the training rates of the encoder and decoder, respectively. Detection rate (DR), false alarm rate (FAR), and the ROC curve were used as quantitative metrics. With $\alpha = 0.9$ and $\beta = 0.1$, the proposed network showed a higher ROC curve than a basic autoencoder across the whole range of DR and FAR; at a FAR of 2%, the DRs of the proposed network and the comparative model were 92.23% and 91.00%, respectively.
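The two-stage scheme described above can be sketched in miniature. The following is an illustrative toy only: it uses single-layer linear encoders/decoders and synthetic data rather than the paper's CNN architecture, and all sizes, data, and variable names (`We`, `Wd`, `Wx`) are assumptions. It shows the structure of the training, not the actual method: stage 1 pre-trains a $Y$-to-$Y$ autoencoder to obtain latent codes $z_y$; stage 2 trains an $X$-encoder so that $z_x$ matches the fixed $z_y$ labels, while $\alpha$ and $\beta$ scale the encoder and decoder update rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dy, dx, dz = 200, 16, 8, 4          # samples, Y dim, X dim, latent dim (assumed)
Y = rng.normal(size=(n, dy))           # synthetic "high-resolution" target domain
X = Y[:, ::2]                          # sparse "low-resolution" view of Y (assumed)
lr = 1e-2

# Stage 1: pre-train a linear autoencoder on Y-to-Y to obtain latent codes z_y.
We = rng.normal(scale=0.1, size=(dy, dz))   # Y-domain encoder
Wd = rng.normal(scale=0.1, size=(dz, dy))   # decoder
for _ in range(800):
    Z = Y @ We                              # candidate latent codes z_y
    G = 2 * (Z @ Wd - Y) / n                # dMSE / d(reconstruction)
    We -= lr * (Y.T @ (G @ Wd.T))           # gradient step on the encoder
    Wd -= lr * (Z.T @ G)                    # gradient step on the decoder
Zy = Y @ We                                 # acquired z_y, frozen as labels

# Stage 2: the X-encoder is trained to predict z_y (latent matching), while
# the decoder keeps minimizing Y-reconstruction error; alpha and beta
# regulate the training rates of the encoder and decoder parts.
alpha, beta = 0.9, 0.1
Wx = rng.normal(scale=0.1, size=(dx, dz))   # X-domain encoder
err_init = float(np.mean((X @ Wx @ Wd - Y) ** 2))
for _ in range(800):
    Zx = X @ Wx
    Wx -= alpha * lr * (X.T @ (2 * (Zx - Zy) / n))      # encoder: match z_x to z_y
    Wd -= beta * lr * (Zx.T @ (2 * (Zx @ Wd - Y) / n))  # decoder: reconstruct Y
err_final = float(np.mean((X @ Wx @ Wd - Y) ** 2))      # X-to-Y translation error
```

Note the separate minimization targets: the encoder's loss is defined in latent space against the pre-trained $z_y$, whereas the decoder's loss is the $Y$-reconstruction error, mirroring the abstract's description of training the two parts with separate strategies.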
Keywords