IEEE Access (Jan 2024)

The Optimally Designed Deep Autoencoder-Based Compressive Sensing Framework for 1D and 2D Signals

  • Irfan Ahmed,
  • Lunchakorn Wuttisittikulkij,
  • Aftab Khan,
  • Abid Iqbal

DOI
https://doi.org/10.1109/ACCESS.2024.3472044
Journal volume & issue
Vol. 12
pp. 150520–150539

Abstract


The capacity of Compressive Sensing (CS) to reconstruct original data from a limited number of samples has attracted growing attention in recent years. With the emergence of Deep Neural Networks (DNNs), CS performance has further improved through the deployment of data-driven Autoencoders. However, suboptimal parameters and hyperparameters, together with inconsistent Autoencoder structures for more complex datasets, lead to inefficient resource utilization and limit the true potential of Autoencoders. In this paper, we propose optimally designed structures and parameters of deep Autoencoder subnetworks for CS-based sampling and reconstruction of speech and image datasets, aligned with the desired objectives and available constraints. Furthermore, we assess how the optimal types, structures, and hyperparameters of Autoencoders match their deployment for speech and image CS. Beyond optimizing structure and hyperparameters, our work emphasizes the use of Autoencoder types other than Stacked Autoencoders for the sampling and reconstruction of speech and image datasets. As a result, the optimally designed Autoencoders demonstrate an 18% performance improvement for speech signals compared with the baseline model. The novelty of this work lies in re-engineering the internal structure of the Convolutional layers to provide optimized hyperparameters and layer architecture for image compressive sensing, suiting a variety of compression ratios to better preserve bandwidth, power, and computational resources. For Convolutional Autoencoders, we found a compression ratio of 0.4 to be optimal, yielding a structural similarity (SSIM) value of 0.85.
The novelty of our work lies in developing AI-driven, data-adaptive models that leverage optimally trained autoencoders for CS, significantly enhancing resource efficiency in terms of storage, hardware complexity, and computational cost.
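To make the compression-ratio idea concrete: at ratio 0.4, a signal of n samples is reduced to m = 0.4n measurements before reconstruction. The paper's deep Autoencoders *learn* both the sampling and reconstruction maps; the sketch below is only a classical linear CS stand-in, assuming a fixed random Gaussian sampling matrix and least-squares reconstruction over a known low-dimensional signal basis. All variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50               # original signal length
cr = 0.4             # compression ratio (measurements / samples)
m = int(cr * n)      # m = 20 compressed measurements

# Toy signal model: signals lie in a 5-dimensional subspace spanned by Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, 5)))
x = Q @ rng.standard_normal(5)          # a compressible test signal

A = rng.standard_normal((m, n))         # sampling operator (the "encoder")
y = A @ x                               # compressed measurements, length m

# "Decoder": least squares for the subspace coefficients, then lift back.
c_hat = np.linalg.lstsq(A @ Q, y, rcond=None)[0]
x_hat = Q @ c_hat

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(m, rel_err)
```

In this idealized linear setting, recovery is near-exact because m exceeds the subspace dimension; the learned Autoencoder replaces both the fixed matrix A and the least-squares decoder with trained nonlinear subnetworks, which is what allows good reconstruction without knowing the signal basis in advance.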

Keywords