Scientific Reports (Nov 2024)
Exploring the uncertainty principle in neural networks through binary classification
Abstract
Neural networks are reported to be vulnerable to minor, imperceptible attacks. The underlying mechanism and a quantitative measure of this vulnerability still remain to be revealed. In this study, we explore the intrinsic trade-off between accuracy and robustness in neural networks, framed through the lens of the “uncertainty principle”. By examining the fundamental limitations imposed by this principle, we reveal how neural networks inherently balance precision in feature extraction against susceptibility to adversarial perturbations. Our analysis highlights that as neural networks achieve higher accuracy, their vulnerability to adversarial attacks increases, a phenomenon rooted in the uncertainty relation. Using the mathematical formalism of quantum mechanics, we offer a theoretical foundation and an analytical method for understanding the vulnerabilities of deep learning models.
Keywords