IEEE Access (Jan 2020)

Double Deep-Q Learning-Based Output Tracking of Probabilistic Boolean Control Networks

  • Antonio Acernese,
  • Amol Yerudkar,
  • Luigi Glielmo,
  • Carmen Del Vecchio

DOI
https://doi.org/10.1109/ACCESS.2020.3035152
Journal volume & issue
Vol. 8
pp. 199254 – 199265

Abstract


In this article, a reinforcement learning (RL)-based scalable technique is presented to control probabilistic Boolean control networks (PBCNs). In particular, a double deep-Q network (DDQN) approach is first proposed to address the output tracking problem of PBCNs, and optimal state feedback controllers are obtained such that the output of PBCNs tracks both a constant and a time-varying reference signal. The presented method is model-free and scalable, thereby providing an efficient way to control large-scale PBCNs, which are a natural choice for modeling gene regulatory networks (GRNs). Finally, three PBCN models of GRNs, including 16-gene and 28-gene networks, are considered to verify the presented results.
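The core of the DDQN approach mentioned above is the double-Q target: the online network selects the greedy next action while a separate target network evaluates it, which reduces the overestimation bias of standard Q-learning. The following is a minimal tabular sketch of that target computation only; the state/action sizes, discount factor, and Q-tables are illustrative toy values, not the paper's PBCN setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes (illustrative assumptions, not from the paper).
n_states, n_actions, gamma = 4, 2, 0.9

# In DDQN these are two neural networks; tables suffice to show the update.
q_online = rng.normal(size=(n_states, n_actions))  # selects actions
q_target = rng.normal(size=(n_states, n_actions))  # evaluates them

def ddqn_target(reward, next_state, done):
    """Double-Q bootstrap target: action chosen by the online table,
    value read from the target table (decoupled selection/evaluation)."""
    if done:
        return reward
    a_star = int(np.argmax(q_online[next_state]))      # selection step
    return reward + gamma * q_target[next_state, a_star]  # evaluation step

y = ddqn_target(reward=1.0, next_state=2, done=False)
```

In a full DDQN implementation, `y` would serve as the regression target for the online network's Q-value at the visited state-action pair, with the target network's weights periodically copied from the online network.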
