Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China; Department of Physics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR, China; Corresponding authors.
Man-Hong Yung
Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China; Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Shenzhen Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Corresponding authors.
Optical neural networks (ONNs) are emerging as attractive proposals for machine-learning applications. However, the stability of ONNs decreases with the circuit depth, limiting the scalability of ONNs for practical use. Here we demonstrate how to compress the circuit depth to scale only logarithmically with the dimension of the data, leading to an exponential gain in noise robustness. Our low-depth (LD) ONN is based on an architecture called Optical CompuTing Of dot-Product UnitS (OCTOPUS), which can also be applied individually as a linear perceptron for solving classification problems. We present both numerical and theoretical evidence showing that LD-ONN can exhibit a significant improvement in robustness, compared with previous ONN proposals based on singular-value decomposition.
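To make the depth claim concrete, the toy sketch below (an illustrative assumption, not the optical implementation itself) evaluates a dot product as a balanced binary-tree reduction, so the number of layers grows as roughly log2(N) rather than N; with a crude per-layer noise factor (1 - eps), the shallow circuit retains exponentially more signal than a fully sequential accumulation of the same N terms.

```python
import numpy as np


def tree_dot(x, w):
    """Dot product via a balanced binary-tree reduction.

    One layer forms the products x_i * w_i; subsequent layers sum the
    terms pairwise, so the depth is about ceil(log2(N)) + 1 instead of N.
    """
    terms = [xi * wi for xi, wi in zip(x, w)]
    depth = 1  # the product layer
    while len(terms) > 1:
        terms = [sum(terms[i:i + 2]) for i in range(0, len(terms), 2)]
        depth += 1
    return terms[0], depth


def surviving_signal(depth, eps=0.01):
    """Toy noise model (assumption): each layer keeps a (1 - eps) fraction."""
    return (1.0 - eps) ** depth


if __name__ == "__main__":
    n = 1024
    rng = np.random.default_rng(0)
    x, w = rng.normal(size=n), rng.normal(size=n)

    value, tree_depth = tree_dot(x, w)
    chain_depth = n  # a fully sequential accumulation needs ~N layers
    print(f"dot product      : {value:.4f} (numpy: {np.dot(x, w):.4f})")
    print(f"tree depth       : {tree_depth}  -> signal {surviving_signal(tree_depth):.3f}")
    print(f"sequential depth : {chain_depth} -> signal {surviving_signal(chain_depth):.3e}")
```

For N = 1024 the tree reduction needs 11 layers versus roughly 1024 for a sequential chain, which is the sense in which a log-depth circuit can yield an exponential advantage in accumulated noise under this simple per-layer model.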