IEEE Access (Jan 2024)
Compressive Sensing Network Deeply Induced by Visual Mechanism
Abstract
In recent years, deep learning methods have attracted widespread attention in the field of image compressive sensing. However, these methods still face challenges such as high computational complexity and severe loss of reconstruction detail. To address these challenges, we propose a Full Visual Mechanism-based Compressive Sensing Network (FVM-CSNet) inspired by the human visual system’s process of perceiving and understanding images. A visual multi-resolution sampling subnetwork is designed to simulate the perceptual characteristics of the human visual system’s frontend, allowing the measurements to better preserve visual information from the original image at the sampling stage. At the reconstruction stage, we exploit the information-processing characteristics of the visual system’s backend and construct a lightweight deep reconstruction subnetwork to improve the quality of image reconstruction. Specifically, we introduce a discrete wavelet transform-based visual weighting module and an inverse discrete wavelet reconstruction fusion module to weight and fuse the different frequency sub-bands, which not only enhances image reconstruction quality but also significantly reduces computational complexity. To further optimize the model’s efficiency, we employ a stepped replicating strategy in the feature transfer of dense residual blocks to improve feature transfer efficiency. Furthermore, by introducing dilated convolutions with varying dilation rates, we enable multi-scale features to be learned, which enriches the feature representation and enhances expressive power without increasing model complexity. Experimental results show that FVM-CSNet exhibits significant advantages on the Set14 dataset over existing advanced methods (TransCS, OCTUF, and DPC-DUN): averaged over four different sampling rates, its PSNR/SSIM improve by 2.42%/1.68%, 1.54%/1.27%, and 0.28%/1.33% relative to these three methods, respectively.
Extensive experiments on other datasets further demonstrate the comprehensive superiority of our method over existing methods. Moreover, FVM-CSNet also offers significant advantages in reconstruction speed.
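The core idea of the wavelet-based weighting and fusion modules can be illustrated with a minimal sketch. The snippet below is not the paper's network; it is an assumed toy version using a single-level 2D Haar transform: an image is split into LL, LH, HL, HH frequency sub-bands, each sub-band is rescaled by a learned (here, hand-chosen) weight, and the inverse transform fuses the weighted bands back into an image. The function names `haar_dwt2`, `haar_idwt2`, and `weighted_reconstruct` are illustrative, not from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns the LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2: fuses the four sub-bands into an image."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def weighted_reconstruct(img, weights):
    """Hypothetical sub-band weighting: scale each band, then inverse-fuse.

    In a network these weights would be learned per band (e.g. emphasizing
    the visually salient low-frequency LL band); here they are fixed scalars.
    """
    bands = haar_dwt2(img)
    scaled = [w * b for w, b in zip(weights, bands)]
    return haar_idwt2(*scaled)
```

With all weights set to 1.0, `weighted_reconstruct` returns the input exactly, since the Haar pair above is perfectly invertible; down-weighting the three detail bands (LH, HL, HH) acts as a simple frequency-selective smoothing, which is one intuition behind processing sub-bands at reduced resolution to cut computational cost.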
Keywords