Quantum Reports (Sep 2022)

Model-Free Deep Recurrent Q-Network Reinforcement Learning for Quantum Circuit Architectures Design

  • Tomah Sogabe,
  • Tomoaki Kimura,
  • Chih-Chieh Chen,
  • Kodai Shiba,
  • Nobuhiro Kasahara,
  • Masaru Sogabe,
  • Katsuyoshi Sakamoto

DOI
https://doi.org/10.3390/quantum4040027
Journal volume & issue
Vol. 4, no. 4
pp. 380–389

Abstract

Artificial intelligence (AI) technology leads to new insights into the manipulation of quantum systems in the Noisy Intermediate-Scale Quantum (NISQ) era. Classical agent-based artificial intelligence algorithms provide a framework for the design and control of quantum systems. Traditional reinforcement learning methods are designed for the Markov Decision Process (MDP) and therefore have difficulty dealing with partially observable or quantum observable decision processes. Because it is difficult to build or infer a model of a specified quantum system, a model-free control approach is more practical and feasible than a model-based one. In this work, we apply a model-free deep recurrent Q-network (DRQN) reinforcement learning method to qubit-based quantum circuit architecture design problems. This paper is the first attempt to solve the quantum circuit design problem with a recurrent reinforcement learning algorithm using a discrete policy. Simulation results suggest that our long short-term memory (LSTM)-based DRQN method is able to learn quantum circuits that prepare entangled Bell–Greenberger–Horne–Zeilinger (Bell–GHZ) states. However, we also observe unstable learning curves in our experiments; thus, while the DRQN appears to be a promising method for AI-based quantum circuit design, further investigation of the stability issue is required.
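To illustrate the target of the circuit-design task described above, the following minimal NumPy sketch builds the canonical three-qubit GHZ circuit (one Hadamard followed by two CNOTs) by direct state-vector simulation and checks that it produces the entangled state (|000⟩ + |111⟩)/√2. This is only an illustrative sketch of the target state, not the authors' DRQN method; the gate ordering and qubit labeling are assumptions for the example.

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
I2 = np.eye(2)

# Two-qubit CNOT (control = first qubit, target = second qubit)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Full 3-qubit operators via Kronecker products (qubit 0 is leftmost)
H0 = np.kron(H, np.kron(I2, I2))   # H on qubit 0
CX01 = np.kron(CNOT, I2)           # CNOT: control 0, target 1
CX12 = np.kron(I2, CNOT)           # CNOT: control 1, target 2

# Start in |000> and apply H0, then CX01, then CX12
state = np.zeros(8)
state[0] = 1.0
psi = CX12 @ CX01 @ H0 @ state

# psi should be (|000> + |111>)/sqrt(2): amplitude 1/sqrt(2) at indices 0 and 7
print(np.round(psi, 4))
```

Running this prints a state vector with amplitude ≈ 0.7071 in the first and last positions and zeros elsewhere, i.e. the Bell–GHZ state the reinforcement-learning agent is trained to discover.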

Keywords