Pattern recognition techniques form the heart of most, if not all, incoherent linear shift-invariant systems. When an object is recorded using a camera, the object information is sampled by the point spread function (PSF) of the system, which replaces every object point with the PSF at the sensor plane. The PSF is a sharp, Kronecker delta-like function when the numerical aperture (NA) is large and there are no aberrations. When the NA is small and the system has aberrations, the PSF appears blurred. In the presence of aberrations, if the PSF is known, the blurred object image can be deblurred by scanning the PSF over the recorded object intensity pattern and searching for pattern-matching conditions through a mathematical process called correlation. Deep learning-based image classification for computer vision applications has gained attention in recent years. The classification probability is highly dependent on image quality, as even a minor blur can significantly alter the classification results. In this study, a recently developed deblurring method, the Lucy-Richardson-Rosen algorithm (LR2A), was implemented to computationally refocus images recorded in the presence of spatio-spectral aberrations. The performance of LR2A was compared against that of its parent techniques: the Lucy-Richardson algorithm and non-linear reconstruction. LR2A exhibited superior deblurring capability even in extreme cases of spatio-spectral aberrations. Experimental results of deblurring pictures recorded using high-resolution smartphone cameras are presented. LR2A was also implemented to significantly improve the performance of widely used deep convolutional neural networks for image classification.
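For readers unfamiliar with the parent technique mentioned above, the sketch below illustrates the classical Lucy-Richardson iteration, in which the recorded image is repeatedly compared against a re-blurred estimate and the estimate is updated through correlation with the PSF. This is a minimal NumPy/SciPy illustration of that parent method only, not the LR2A implementation used in the study; the function name and parameters (`richardson_lucy`, `iterations`, `eps`) are illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=50, eps=1e-12):
    """Classical Lucy-Richardson deconvolution (a parent technique of LR2A).

    `blurred` is the recorded intensity image and `psf` the known point
    spread function; both are non-negative 2-D arrays.
    """
    psf = psf / psf.sum()            # normalise PSF energy
    psf_mirror = psf[::-1, ::-1]     # flipped PSF for the correlation step
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(iterations):
        # Forward model: blur the current estimate with the PSF.
        reblurred = fftconvolve(estimate, psf, mode="same")
        # Ratio of the measurement to the re-blurred estimate.
        ratio = blurred / (reblurred + eps)
        # Correlate the ratio with the PSF and update multiplicatively.
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

As described in the study, LR2A modifies this iteration by replacing the correlation step with the non-linear reconstruction approach; the details of that modification are given in the main text.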