IEEE Access (Jan 2021)

Wireless Access Control in Edge-Aided Disaster Response: A Deep Reinforcement Learning-Based Approach

  • Hang Zhou,
  • Xiaoyan Wang,
  • Masahiro Umehira,
  • Xianfu Chen,
  • Celimuge Wu,
  • Yusheng Ji

DOI
https://doi.org/10.1109/ACCESS.2021.3067662
Journal volume & issue
Vol. 9
pp. 46600 – 46611

Abstract


Communication infrastructure is likely to be damaged after a major disaster occurs, which can lead to further chaos in the disaster-stricken area. Modern rescue activities rely heavily on wireless communications for tasks such as safety status reporting, disrupted-area monitoring, evacuation instructions, and rescue coordination. Large amounts of data generated by victims, sensors, and responders must be delivered and processed quickly and reliably, even when the normal communication infrastructure is degraded or destroyed. To this end, reconstructing the post-disaster network by deploying a Movable and Deployable Resource Unit (MDRU) and relay units at the edge is a promising solution. However, optimal wireless access control in this heterogeneous, hastily formed network is extremely challenging, due to the frequently varying environment and the lack of prior statistical information in post-disaster scenarios. In this paper, we propose a learning-based wireless access control approach for the edge-aided disaster response network. More specifically, we model the wireless access control procedure as a discrete-time single-agent Markov decision process and solve the problem by exploiting a deep reinforcement learning technique. Extensive simulation results show that the proposed mechanism significantly outperforms baseline schemes in terms of delay and packet drop rate.
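The abstract casts each access decision as a step in a discrete-time single-agent Markov decision process solved with deep reinforcement learning. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' implementation: the state features, the three access actions (transmit via MDRU, transmit via relay, defer), the network sizes, and the reward handling are all assumptions made for demonstration, following a standard DQN-style update.

# Minimal DQN-style sketch (illustrative only; not the paper's implementation).
# All dimensions, actions, and rewards below are assumed for demonstration.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4      # assumed features: e.g., queue length, channel quality, relay load
NUM_ACTIONS = 3    # assumed options: transmit via MDRU, transmit via relay, defer

class QNetwork(nn.Module):
    """Small fully connected network approximating Q(state, action)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states = zip(*batch)
        return (torch.stack(states), torch.tensor(actions),
                torch.tensor(rewards, dtype=torch.float32), torch.stack(next_states))

def select_action(q_net, state, epsilon):
    """Epsilon-greedy choice over the assumed access options."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())

def train_step(q_net, target_net, buffer, optimizer, batch_size=32, gamma=0.99):
    """One temporal-difference update on a sampled mini-batch."""
    if len(buffer.buffer) < batch_size:
        return None
    states, actions, rewards, next_states = buffer.sample(batch_size)
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q
    loss = nn.functional.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In a setup like this, the separate target network and the replay buffer help stabilize learning when the environment varies frequently, as in post-disaster scenarios, since sampled mini-batches decorrelate consecutive observations.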

Keywords