Opto-Electronic Advances (Apr 2024)

Efficient stochastic parallel gradient descent training for on-chip optical processor

  • Yuanjian Wan
  • Xudong Liu
  • Guangze Wu
  • Min Yang
  • Guofeng Yan
  • Yu Zhang
  • Jian Wang

DOI: https://doi.org/10.29026/oea.2024.230182
Journal volume & issue: Vol. 7, No. 4, pp. 1–11

Abstract

In recent years, space-division multiplexing (SDM) technology, which transmits data over multiple parallel channels for efficient capacity scaling, has been widely used in fiber and free-space optical communication systems. To enable flexible data management and cope with mixing between channels, integrated reconfigurable optical processors are used for optical switching and for mitigating channel crosstalk. However, efficient online training becomes intricate and challenging, particularly when dealing with a large number of channels. Here we use the stochastic parallel gradient descent (SPGD) algorithm to configure the integrated optical processor, which requires less computation than the traditional gradient descent (GD) algorithm. We design and fabricate a 6×6 on-chip optical processor on a silicon platform to implement optical switching and descrambling, assisted by online training with the SPGD algorithm. Moreover, we apply the on-chip processor configured by the SPGD algorithm to optical communications for optical switching and efficient mitigation of channel crosstalk in SDM systems. Compared with the traditional GD algorithm, the SPGD algorithm performs better, especially when the matrix scale is large, which means it has the potential to optimize large-scale optical matrix computation acceleration chips.
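For readers unfamiliar with SPGD, a minimal Python sketch of the two-sided update is given below. This is not the authors' code: the cost function, gain, perturbation size, and parameter count are illustrative placeholders, and on the real chip the cost would be a measured quantity (e.g., residual output crosstalk) rather than an analytic function. The sketch shows the key computational advantage the abstract refers to: all parameters are perturbed simultaneously, so each iteration needs only two cost evaluations regardless of how many phase shifters the mesh has, versus roughly N+1 evaluations per step for finite-difference GD.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(phases):
    # Placeholder figure of merit: distance to an arbitrary target
    # phase setting. On hardware this would be a photodetector
    # reading, not an analytic function.
    target = np.linspace(0.0, np.pi, phases.size)
    return float(np.sum((phases - target) ** 2))

def spgd(phases, gain=1.0, delta=0.05, iters=2000):
    # Two-sided SPGD: apply a random +/-delta perturbation to ALL
    # phase shifters at once, measure the cost change, and step
    # every parameter in parallel against that change.
    for _ in range(iters):
        perturb = delta * rng.choice([-1.0, 1.0], size=phases.size)
        dj = cost(phases + perturb) - cost(phases - perturb)
        phases = phases - gain * dj * perturb  # parallel update
    return phases

phases = spgd(rng.uniform(0.0, 2.0 * np.pi, size=36))  # parameter count illustrative
print(f"final cost: {cost(phases):.3e}")
```

Because the random sign vector correlates with the measured cost change, the product `dj * perturb` is, on average, proportional to the true gradient; the cross-terms from other parameters average out over iterations, which is why the per-iteration measurement cost stays constant as the channel count grows.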

Keywords