Nature Communications (Oct 2024)

120 GOPS Photonic tensor core in thin-film lithium niobate for inference and in situ training

  • Zhongjin Lin,
  • Bhavin J. Shastri,
  • Shangxuan Yu,
  • Jingxiang Song,
  • Yuntao Zhu,
  • Arman Safarnejadian,
  • Wangning Cai,
  • Yanmei Lin,
  • Wei Ke,
  • Mustafa Hammood,
  • Tianye Wang,
  • Mengyue Xu,
  • Zibo Zheng,
  • Mohammed Al-Qadasi,
  • Omid Esmaeeli,
  • Mohamed Rahim,
  • Grzegorz Pakulski,
  • Jens Schmid,
  • Pedro Barrios,
  • Weihong Jiang,
  • Hugh Morison,
  • Matthew Mitchell,
  • Xun Guan,
  • Nicolas A. F. Jaeger,
  • Leslie A. Rusch,
  • Sudip Shekhar,
  • Wei Shi,
  • Siyuan Yu,
  • Xinlun Cai,
  • Lukas Chrostowski

DOI
https://doi.org/10.1038/s41467-024-53261-x
Journal volume & issue
Vol. 15, no. 1
pp. 1–10

Abstract

Photonics offers a transformative approach to artificial intelligence (AI) and neuromorphic computing by enabling low-latency, high-speed, and energy-efficient computations. However, conventional photonic tensor cores face significant challenges in constructing large-scale photonic neuromorphic networks. Here, we propose a fully integrated photonic tensor core, consisting of only two thin-film lithium niobate (TFLN) modulators, a III-V laser, and a charge-integration photoreceiver. Despite its simple architecture, it is capable of implementing an entire layer of a neural network with a computational speed of 120 GOPS, while also allowing flexible adjustment of the number of inputs (fan-in) and outputs (fan-out). Our tensor core supports rapid in situ training with a weight update speed of 60 GHz. Furthermore, it successfully classifies (supervised learning) and clusters (unsupervised learning) 112 × 112-pixel images through in situ training. To enable in situ training for clustering AI tasks, we offer a solution for performing multiplications between two negative numbers.
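As an illustrative aside (not part of the paper, and not the authors' hardware implementation): the abstract's claim that a single tensor core can implement "an entire layer of a neural network" with adjustable fan-in and fan-out corresponds mathematically to a matrix-vector product accumulated one multiply-accumulate (MAC) at a time, much as a charge-integration photoreceiver sums products over the fan-in dimension. The sketch below models only that arithmetic; the GOPS convention of counting each MAC as two operations (one multiply, one add) is also shown.

```python
import numpy as np

def layer_matvec(weights: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    """Compute one neural-network layer as accumulated MAC operations.

    Illustrative model only: each output neuron integrates weight*input
    products over the fan-in dimension, loosely mirroring how a
    charge-integration receiver sums optical products over time.
    Fan-in and fan-out are free parameters of the weight matrix shape.
    """
    fan_out, fan_in = weights.shape
    outputs = np.zeros(fan_out)
    for j in range(fan_out):        # one accumulation per output (fan-out)
        acc = 0.0
        for i in range(fan_in):     # integrate over all inputs (fan-in)
            acc += weights[j, i] * inputs[i]
        outputs[j] = acc
    return outputs

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))         # arbitrary choice: fan-out=3, fan-in=5
x = rng.normal(size=5)

y = layer_matvec(W, x)
print(np.allclose(y, W @ x))        # matches a standard matrix-vector product

# GOPS accounting convention: each MAC counts as 2 operations.
ops_per_pass = 2 * W.shape[0] * W.shape[1]
print(ops_per_pass)
```

Note that this sketch says nothing about how the paper handles products of two negative numbers in the photonic domain; that technique is described in the article itself.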