IEEE Access (Jan 2025)

AI-Driven Resource Allocation for RIS-Assisted NOMA in IoT Networks

  • Syed M. Hamedoon,
  • Jawwad Nasar Chattha,
  • Umair Rashid,
  • S. M. Ahsan Kazmi,
  • Manuel Mazzara

DOI
https://doi.org/10.1109/access.2025.3559231
Journal volume & issue
Vol. 13
pp. 68152 – 68171

Abstract

The Internet of Things (IoT) plays a significant role in wireless communication for future applications such as smart homes, smart cities, intelligent transportation, and telecare. However, the emergence of IoT on a large scale has introduced numerous challenges for current wireless communication systems in terms of connectivity, coverage, and energy dissipation. We propose a Reconfigurable Intelligent Surface (RIS)-assisted downlink Non-Orthogonal Multiple Access (NOMA) scheme for IoT networks, in which we address the challenge of jointly optimizing power allocation, RIS phase shifts, and energy efficiency. In our approach, users are first clustered based on channel gain differences, and resource optimization is then performed. The primary objective is to maximize system performance through a series of optimization techniques. Initially, power allocation and RIS phase shifts are jointly optimized to enhance energy efficiency, with the non-convexity of the problem handled through alternating optimization and fractional programming. Subsequently, an alternative optimization strategy based on the Karush-Kuhn-Tucker (KKT) conditions further refines the power allocation and RIS phase shifts to maximize the effective throughput over the transmission period. Machine learning (ML) is critically important for addressing the explosive growth in data volume and computational complexity, particularly in the optimization of smart 6G networks. In the final phase, we therefore introduce deep learning (DL) and reinforcement learning (RL) approaches to jointly optimize power allocation and RIS phase shifts in dynamic environments. The DL approach demonstrates superior performance in terms of system sum rate, especially under varying network conditions, while the RL approach excels in long-term reward optimization. Numerical results validate the proposed framework, showing significant improvements in both sum rate and energy efficiency.
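To make the system model concrete, the sketch below illustrates the building blocks the abstract refers to: a two-user downlink NOMA cluster (one strong, one weak user, as would result from clustering by channel gain difference), an RIS whose phase shifts are set by a simple closed-form alignment heuristic, and the successive-interference-cancellation (SIC) sum rate. All channel statistics, the fixed power split, and the phase-alignment rule are illustrative assumptions, not the paper's actual optimization algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32          # number of RIS elements (assumed)
P = 1.0         # total BS transmit power (normalized)
noise = 1e-3    # noise power (assumed)

# Illustrative Rayleigh channels: f is BS->RIS, g is RIS->user,
# hd is the direct BS->user link.
f = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def user_channel(scale):
    g = scale * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    hd = scale * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    return g, hd

g1, hd1 = user_channel(1.0)   # "strong" (near) user
g2, hd2 = user_channel(0.3)   # "weak" (far) user

def effective_gain(theta, g, hd):
    """|h_d + g^H diag(e^{j*theta}) f|^2 for one user."""
    return np.abs(hd + np.sum(np.conj(g) * np.exp(1j * theta) * f)) ** 2

# Heuristic phase design (assumption): co-phase every cascaded
# RIS path with the weak user's direct link, which maximizes that
# user's effective channel gain in closed form.
theta = np.angle(hd2) - np.angle(np.conj(g2) * f)

# Fixed NOMA power split (assumption): more power to the weak user.
a_weak, a_strong = 0.8, 0.2

G1 = effective_gain(theta, g1, hd1)
G2 = effective_gain(theta, g2, hd2)

# Weak user treats the strong user's signal as interference;
# the strong user cancels the weak user's signal via SIC.
r_weak = np.log2(1 + a_weak * P * G2 / (a_strong * P * G2 + noise))
r_strong = np.log2(1 + a_strong * P * G1 / noise)
sum_rate = r_weak + r_strong
print(f"sum rate: {sum_rate:.2f} bit/s/Hz")
```

In the paper this fixed power split and heuristic phase rule are replaced by the alternating-optimization, fractional-programming, KKT-based, and learning-based designs described above; the sketch only fixes the quantities those methods optimize.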

Keywords