IET Radar, Sonar & Navigation (Jul 2023)

Boosting multi‐target recognition performance with multi‐input multi‐output radar‐based angular subspace projection and multi‐view deep neural network

  • Emre Kurtoğlu,
  • Sabyasachi Biswas,
  • Ali C. Gurbuz,
  • Sevgi Zubeyde Gurbuz

DOI
https://doi.org/10.1049/rsn2.12405
Journal volume & issue
Vol. 17, no. 7
pp. 1115 – 1128

Abstract

Current radio frequency (RF) classification techniques assume only one target in the field of view. Multi‐target recognition is challenging because conventional radar signal processing results in the superposition of target micro‐Doppler signatures, making it difficult to recognise multi‐target activity. This study proposes an angular subspace projection technique that generates multiple radar data cubes (RDC) conditioned on angle (RDC‐ω). This approach enables signal separation in the raw RDC, making it possible to use deep neural networks that take the raw RF data, or any other data representation, as input in multi‐target scenarios. When targets are in close proximity and cannot be separated by classical techniques, the proposed approach boosts the relative signal‐to‐noise ratio between targets, resulting in multi‐view spectrograms that boost classification accuracy when input to the proposed multi‐view DNN. Our results qualitatively and quantitatively characterise the similarity of multi‐view signatures to those acquired in a single‐target configuration. For a nine‐class activity recognition problem, 97.8% accuracy is achieved in a three‐person scenario while utilising a DNN trained on single‐target data. We also present results for two close‐proximity cases (sign language recognition and side‐by‐side activities), where the proposed approach boosts performance.
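
As a rough illustration of the angle‐conditioned data cube idea described in the abstract, the sketch below applies a conventional steering‐vector (beamforming) projection to a raw MIMO data cube and then forms one micro‐Doppler spectrogram per angular view. The array geometry, data cube shape, STFT parameters, and target angles are illustrative assumptions, not the paper's actual processing chain.

```python
import numpy as np
from scipy.signal import stft

def steering_vector(num_channels, angle_deg, spacing=0.5):
    """Steering vector of a uniform linear array for a given azimuth angle.
    spacing is the element spacing in wavelengths (lambda/2 assumed)."""
    n = np.arange(num_channels)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(np.deg2rad(angle_deg)))

def angle_conditioned_rdc(rdc, angle_deg):
    """Project a raw data cube (channels x chirps x samples) onto a single
    angular subspace, yielding an angle-conditioned cube (chirps x samples)."""
    num_ch = rdc.shape[0]
    w = steering_vector(num_ch, angle_deg) / num_ch        # beamforming weights
    return np.tensordot(np.conj(w), rdc, axes=([0], [0]))  # coherent sum over channels

def micro_doppler_spectrogram(rdc_omega, prf):
    """Sum over range bins, then STFT over slow time to obtain the
    micro-Doppler spectrogram for one angular view (in dB)."""
    slow_time = rdc_omega.sum(axis=1)
    _, _, s = stft(slow_time, fs=prf, nperseg=128, noverlap=120,
                   return_onesided=False)
    return np.fft.fftshift(20 * np.log10(np.abs(s) + 1e-12), axes=0)

# Example: one spectrogram view per target angle (placeholder data and angles;
# in practice the angles would come from a detection / direction-of-arrival step).
rdc = np.random.randn(8, 256, 128) + 1j * np.random.randn(8, 256, 128)
views = [micro_doppler_spectrogram(angle_conditioned_rdc(rdc, a), prf=1000.0)
         for a in (-20.0, 0.0, 25.0)]
```

Each element of `views` is a spectrogram conditioned on one angle; feeding these views jointly to a multi‐view classifier mirrors, in spirit, the multi‐view DNN input described in the abstract.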

Keywords