Network (Jun 2022)

A Computationally Efficient Gradient Algorithm for Downlink Training Sequence Optimization in FDD Massive MIMO Systems

  • Muntadher Alsabah,
  • Marwah Abdulrazzaq Naser,
  • Basheera M. Mahmmod,
  • Sadiq H. Abdulhussain

DOI: https://doi.org/10.3390/network2020021
Journal volume & issue: Vol. 2, No. 2, pp. 329–349

Abstract

Future wireless networks will require advanced physical-layer techniques to meet the requirements of Internet of Everything (IoE) applications and massive communication systems. To this end, the massive MIMO (m-MIMO) system is currently considered one of the key technologies for future wireless networks, owing to its capability to deliver significant improvements in spectral efficiency and energy efficiency. However, designing an efficient downlink (DL) training sequence for fast channel state information (CSI) estimation, i.e., within a limited coherence time, in a frequency division duplex (FDD) m-MIMO system when users exhibit different correlation patterns, i.e., span distinct channel covariance matrices, remains very challenging. Although advanced iterative algorithms have been developed to address this challenge, they exhibit slow convergence and thus incur high latency and computational complexity. To overcome this challenge, we propose a computationally efficient conjugate gradient-descent (CGD) algorithm on the Riemannian manifold that optimizes the DL training sequence at the base station (BS) while improving the convergence rate, thereby providing fast CSI estimation for an FDD m-MIMO system. The sum-rate and computational-complexity performance of the proposed training solution is compared with state-of-the-art iterative algorithms. The results show that the proposed training solution maximizes the achievable sum-rate performance while delivering lower overall computational complexity, owing to its faster convergence rate in comparison with the state-of-the-art iterative algorithms.
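To make the structure of such a method concrete, the following is a minimal Python sketch of a Riemannian conjugate gradient-descent loop for training-sequence design. It assumes a sum-MSE objective built from per-user channel covariance matrices and a total training-power constraint modeled as a unit Frobenius-norm sphere; the objective, manifold, function names (sum_mse, riemannian_cgd), and parameter choices are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def sum_mse(S, covs, sigma2):
    # Sum of per-user MMSE channel-estimation errors for a T x M training matrix S.
    # Assumed objective: MSE_k = tr( (R_k^{-1} + S^H S / sigma2)^{-1} ).
    ShS = S.conj().T @ S
    total = 0.0
    for R in covs:
        A = np.linalg.inv(R) + ShS / sigma2
        total += np.trace(np.linalg.inv(A)).real
    return total

def euclidean_grad(S, covs, sigma2):
    # Wirtinger gradient of sum_mse with respect to S (up to a constant scale).
    ShS = S.conj().T @ S
    G = np.zeros_like(S)
    for R in covs:
        A_inv = np.linalg.inv(np.linalg.inv(R) + ShS / sigma2)
        G += -(S @ (A_inv @ A_inv)) / sigma2
    return G

def riemannian_cgd(covs, T, sigma2=1.0, iters=200, step0=1.0):
    # Conjugate gradient descent on the sphere { S : ||S||_F = 1 } (total-power constraint).
    M = covs[0].shape[0]
    rng = np.random.default_rng(0)
    S = rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))
    S /= np.linalg.norm(S)                      # project the start point onto the power sphere
    g_prev, d = None, None
    for _ in range(iters):
        G = euclidean_grad(S, covs, sigma2)
        # Riemannian gradient: project the Euclidean gradient onto the tangent space of the sphere.
        g = G - np.real(np.vdot(S, G)) * S
        if d is None:
            d = -g
        else:
            # Polak-Ribiere conjugacy coefficient (clipped at zero for stability).
            beta = max(0.0, np.real(np.vdot(g, g - g_prev)) / max(np.real(np.vdot(g_prev, g_prev)), 1e-12))
            d = -g + beta * d
        # Backtracking line search along d, followed by retraction (renormalization).
        step, f0, S_new = step0, sum_mse(S, covs, sigma2), S
        while step > 1e-8:
            cand = S + step * d
            cand /= np.linalg.norm(cand)        # retraction back onto the sphere
            if sum_mse(cand, covs, sigma2) < f0:
                S_new = cand
                break
            step *= 0.5
        S = S_new
        # Simple vector transport: project the previous gradient and direction onto the new tangent space.
        g_prev = g - np.real(np.vdot(S, g)) * S
        d = d - np.real(np.vdot(S, d)) * S
    return S

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M, T, K = 8, 4, 3                           # BS antennas, training length, users (illustrative sizes)
    covs = []
    for _ in range(K):
        B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
        covs.append(B @ B.conj().T / M + 0.1 * np.eye(M))   # random positive-definite user covariance
    S_opt = riemannian_cgd(covs, T)
    print("optimized sum MSE:", sum_mse(S_opt, covs, sigma2=1.0))

In practice the manifold, retraction, and objective would be matched to the paper's exact formulation; the loop above only illustrates how tangent-space projection, a Polak-Ribiere direction update, and a retraction step combine into one Riemannian CGD iteration.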

Keywords