IEEE Access (Jan 2024)
Cache Sharing in UAV-Enabled Cellular Network: A Deep Reinforcement Learning-Based Approach
Abstract
Caching content at base stations has proven effective at reducing transmission delay. This paper investigates the caching problem in a network of highly dynamic cache-enabled Unmanned Aerial Vehicles (UAVs) that serve ground users as aerial base stations. In this scenario, UAVs share their caches to minimize the total transmission delay of requested content while simultaneously adjusting their locations. To address this challenge, we formulate a non-convex optimization problem that jointly controls UAV mobility, user association, and content caching to minimize transmission delay. Because traditional optimization approaches fall short in such a highly dynamic environment, we propose a deep reinforcement learning (RL)-based algorithm; specifically, we employ the actor-critic Deep Deterministic Policy Gradient (DDPG) algorithm to solve the optimization problem effectively. Extensive simulations across varying cache sizes and numbers of users associated with their home UAVs show that the proposed algorithm consistently outperforms two baseline approaches.
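To make the joint control problem concrete, the sketch below models the kind of MDP the abstract describes: the state holds UAV positions and cache contents, a continuous action moves UAVs and rescores cached files (as a DDPG actor would output), and the reward is the negative total transmission delay. All names, dimensions, the nearest-UAV association rule, and the miss penalty are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions (not from the paper).
N_UAVS, N_USERS, N_FILES, CACHE_SIZE = 3, 6, 5, 2
AREA = 100.0  # side length of a square service area (assumed)

uav_pos = rng.uniform(0, AREA, size=(N_UAVS, 2))
user_pos = rng.uniform(0, AREA, size=(N_USERS, 2))
caches = np.zeros((N_UAVS, N_FILES), dtype=bool)

def apply_action(move, cache_scores):
    """Continuous action: UAV displacements plus per-file caching scores.

    Each UAV keeps its CACHE_SIZE highest-scoring files -- one simple way
    to map DDPG's continuous output onto a discrete cache placement.
    """
    global uav_pos
    uav_pos = np.clip(uav_pos + move, 0, AREA)
    top = np.argsort(-cache_scores, axis=1)[:, :CACHE_SIZE]
    caches[:] = False
    np.put_along_axis(caches, top, True, axis=1)

def transmission_delay(requests):
    """Assumed delay model: each user associates with its nearest ("home")
    UAV; a local cache hit costs the access distance only, while a miss
    adds a fixed penalty for fetching the file via cache sharing."""
    d = np.linalg.norm(user_pos[:, None] - uav_pos[None], axis=2)
    home = d.argmin(axis=1)                 # user association
    access = d[np.arange(N_USERS), home]    # access-link cost
    hit = caches[home, requests]            # local cache hit?
    return float(np.sum(access + np.where(hit, 0.0, 50.0)))

# One environment step with a random action; a DDPG agent would instead
# pick the action that maximizes the (negative-delay) reward.
requests = rng.integers(0, N_FILES, size=N_USERS)
apply_action(rng.normal(0, 1, (N_UAVS, 2)),
             rng.normal(size=(N_UAVS, N_FILES)))
reward = -transmission_delay(requests)
```

The reward signal couples all three decision variables (mobility, association, caching), which is what makes the problem non-convex and a natural fit for an actor-critic method over a continuous action space.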
Keywords