IEEE Open Journal of Signal Processing (Jan 2023)

CNNs Avoid the Curse of Dimensionality by Learning on Patches

  • Vamshi C. Madala
  • Shivkumar Chandrasekaran
  • Jason Bunk

DOI: https://doi.org/10.1109/OJSP.2023.3270082
Journal volume & issue: Vol. 4, pp. 233–241

Abstract


Despite the success of convolutional neural networks (CNNs) in numerous computer vision tasks and their extraordinary generalization performance, attempts to predict the generalization error of CNNs have so far been limited to a posteriori analyses. A priori theories explaining the generalization performance of deep neural networks have mostly ignored convolutionality and do not specify why CNNs seemingly overcome the curse of dimensionality on computer vision tasks such as image classification, where the image dimensions are in the thousands. Our work attempts to explain the generalization performance of CNNs on image classification under the hypothesis that CNNs operate on the domain of image patches. To the best of our knowledge, ours is the first work to derive an a priori bound on the generalization error of CNNs, and we present both quantitative and qualitative evidence in support of our theory. Our patch-based theory also explains why data augmentation techniques such as Cutout, CutMix, and random cropping are effective in improving the generalization error of CNNs.
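The abstract mentions Cutout among the patch-level augmentations whose effectiveness the theory accounts for. As a point of reference only (not code from the paper), the following is a minimal NumPy sketch of the standard Cutout operation, which erases a random square patch of an image; the function name and default mask size are illustrative choices, not taken from the article.

```python
import numpy as np

def cutout(image, mask_size=16, rng=None):
    """Cutout augmentation sketch: zero out a random square patch.

    image: H x W x C array; mask_size: side length of the erased square
    (an illustrative default, not a value specified in the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Pick the centre of the square uniformly at random.
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)
    x1, x2 = max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = 0  # erase the patch
    return out

# Example usage on a random 32x32 RGB image.
img = np.random.rand(32, 32, 3)
augmented = cutout(img, mask_size=8)
```

Under the paper's patch-based view, such augmentations perturb or remove individual patches rather than the whole image, which is consistent with the claim that CNN generalization is governed by learning on patches.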

Keywords