PRX Quantum (Oct 2021)
Entanglement-Induced Barren Plateaus
Abstract
We argue that an excess of entanglement between the visible and hidden units in a quantum neural network can hinder learning. In particular, we show that quantum neural networks whose entanglement entropy satisfies a volume law will, with high probability, give rise to models that are unsuitable for learning. Using arguments from quantum thermodynamics, we then show that this volume law is typical and that it produces a barren plateau in the optimization landscape. More precisely, we show that for any bounded objective function on the visible layers, the Lipschitz constant of the expectation value of that objective function will, with high probability, scale inversely with the dimension of the hidden subsystem. We show how this can cause both gradient-descent and gradient-free methods to fail. We note that similar problems can occur in quantum Boltzmann machines, although stronger assumptions on the coupling between the hidden and visible subsystems are required. We highlight how pretraining such generative models may provide a way to navigate these barren plateaus.
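The concentration phenomenon behind the abstract's claim can be illustrated numerically: for Haar-random (volume-law-entangled) states, the expectation value of a bounded observable on the visible subsystem concentrates around its mean, with variance shrinking as the hidden dimension grows. The sketch below is an illustration of this typicality argument only, not the paper's construction; the sample count, observable (Pauli Z on one visible qubit), and qubit splits are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim, rng):
    # Haar-random pure state: a normalized complex Gaussian vector.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def z_visible(psi):
    # <Z> on the first (visible) qubit: difference of probability
    # mass between the |0> and |1> branches of that qubit.
    probs = np.abs(psi.reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()

n_visible = 1
variances = []
for n_hidden in (2, 5, 8):
    dim = 2 ** (n_visible + n_hidden)
    samples = [z_visible(haar_state(dim, rng)) for _ in range(300)]
    variances.append(np.var(samples))

# For a Haar-random state of total dimension D, Var(<Z>) = 1/(D + 1),
# so each extra hidden qubit roughly halves the variance.
print(variances)
```

Shrinking variance of the observable translates into shrinking gradients of any objective built from it, which is the barren-plateau mechanism the abstract refers to.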