Nature Communications (Feb 2024)

High-efficiency reinforcement learning with hybrid architecture photonic integrated circuit

  • Xuan-Kun Li,
  • Jian-Xu Ma,
  • Xiang-Yu Li,
  • Jun-Jie Hu,
  • Chuan-Yang Ding,
  • Feng-Kai Han,
  • Xiao-Min Guo,
  • Xi Tan,
  • Xian-Min Jin

DOI
https://doi.org/10.1038/s41467-024-45305-z
Journal volume & issue
Vol. 15, no. 1
pp. 1–10

Abstract


Reinforcement learning (RL) stands as one of the three fundamental paradigms within machine learning and has made substantial strides toward building general-purpose learning systems. However, using traditional electronic computers to simulate agent-environment interactions in RL models consumes tremendous computing resources, posing a significant challenge to the efficiency of RL. Here, we propose a universal framework that utilizes a photonic integrated circuit (PIC) to simulate the interactions in RL and thereby improve algorithmic efficiency. Highly parallel, high-precision on-chip optical interaction calculations are implemented with the assistance of link calibration in the hybrid architecture PIC. By introducing similarity information into the reward function of the RL model, PIC-RL successfully accomplishes a perovskite materials synthesis task within a 3472-dimensional state space, yielding a notable 56% improvement in efficiency. Our results validate the effectiveness of simulating RL algorithm interactions on the PIC platform, highlighting its potential to boost computing power in large-scale and sophisticated RL tasks.