PRX Quantum (Jul 2024)

Provably Trainable Rotationally Equivariant Quantum Machine Learning

  • Maxwell T. West,
  • Jamie Heredge,
  • Martin Sevior,
  • Muhammad Usman

DOI
https://doi.org/10.1103/PRXQuantum.5.030320
Journal volume & issue
Vol. 5, no. 3
p. 030320

Abstract


Exploiting the power of quantum computation to realize superior machine learning algorithms has been a major research focus of recent years, but the prospects of quantum machine learning (QML) remain dampened by considerable technical challenges. A particularly significant issue is that generic QML models suffer from so-called barren plateaus in their training landscapes—large regions where cost function gradients vanish exponentially in the number of qubits employed, rendering large models effectively untrainable. A leading strategy for combating this effect is to build problem-specific models that take into account the symmetries of their data in order to focus on a smaller, relevant subset of Hilbert space. In this work, we introduce a family of rotationally equivariant QML models built upon the quantum Fourier transform, and leverage recent insights from the Lie-algebraic study of QML models to prove that (a subset of) our models do not exhibit barren plateaus. In addition to our analytical results we numerically test our rotationally equivariant models on a dataset of simulated scanning tunneling microscope images of phosphorus impurities in silicon, where rotational symmetry naturally arises, and find that they dramatically outperform their generic counterparts in practice.
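The core idea behind the paper's QFT-based construction can be illustrated with a classical analogy (this sketch is my illustration, not the authors' quantum circuit): if a planar rotation of the input is encoded as a cyclic shift of a feature vector, the Fourier transform maps that shift to pure phases, so Fourier magnitudes are rotation-invariant features.

```python
import numpy as np

# Classical analogy, assuming rotations are encoded as cyclic shifts
# of an angularly sampled feature vector (a hypothetical encoding).
rng = np.random.default_rng(0)
x = rng.normal(size=16)      # angular samples of some image feature
rotated = np.roll(x, 5)      # a "rotated" copy of the input

# The DFT diagonalizes cyclic shifts: a shift becomes a per-frequency
# phase factor, so the magnitude spectrum is unchanged.
inv_original = np.abs(np.fft.fft(x))
inv_rotated = np.abs(np.fft.fft(rotated))

print(np.allclose(inv_original, inv_rotated))  # True: same invariants
```

In the quantum setting the QFT plays the analogous diagonalizing role, which is what lets the authors build equivariant models and, via the Lie-algebraic analysis they cite, prove the absence of barren plateaus for a subset of them.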