IEEE Access (Jan 2024)

SecureLite: An Intelligent Defense Mechanism for Securing CNN Models Against Model Inversion Attack

  • Hanan Hussain,
  • PS Tamizharasan,
  • Gaurang Rajeev Pandit,
  • Alavikunhu Panthakkan,
  • Wathiq Mansoor

DOI: https://doi.org/10.1109/ACCESS.2024.3457846
Journal volume & issue: Vol. 12, pp. 137599–137617

Abstract


The growing use of deep learning models in end-device applications has exposed them to various inference attacks and the associated data privacy threats. Recent research also reveals the susceptibility of CNN models to model inversion attacks, in which an adversary reconstructs private training data from the model. In response, this paper proposes a novel defense strategy called SecureLite, which defends against model inversion attacks through a series of data processing and data augmentation steps, followed by a novel adversarial training-aware model obfuscation (ATMO) algorithm. ATMO combines adversarial training with defensive distillation and Laplace-based weight obfuscation. Our results demonstrate that SecureLite effectively mitigates model inversion attacks: on image classification tasks with four CNN target models (Simple CNN, AlexNet, VGG-16, and MobileNet-V1), it reduces the attack success rate (ASR) of model inversion attacks by 85.69% to 99.36% across multiple datasets, including CIFAR-10, GTSRB, and Caltech-101. The inverted images are also evaluated using benchmark performance metrics, including LPIPS, SSIM, and PSNR, confirming substantial quality degradation of the reconstructions. Furthermore, a comparative analysis with three other state-of-the-art defense mechanisms in the literature shows that the proposed approach outperforms them, achieving superior results without compromising overall model performance. This method substantially enhances data privacy protection and model security, making a significant contribution to the secure deployment of deep learning in current technologies.
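The abstract only names the components of ATMO, so as a rough illustration the sketch below shows one plausible way the Laplace weight obfuscation step could be applied to a trained CNN in PyTorch: zero-mean Laplace noise is added to every parameter tensor before the model is released. This is an assumption about the general mechanism, not the paper's actual ATMO algorithm; the function name laplace_obfuscate, the noise scale, and the toy CNN are hypothetical.

    # Minimal sketch (assumption, not the paper's ATMO algorithm): perturb a trained
    # CNN's parameters with Laplace-distributed noise, trading a small accuracy drop
    # for increased resistance to model inversion.
    import torch
    import torch.nn as nn

    def laplace_obfuscate(model: nn.Module, scale: float = 1e-3) -> nn.Module:
        """Add zero-mean Laplace noise of the given scale to every parameter tensor."""
        with torch.no_grad():
            for param in model.parameters():
                noise = torch.distributions.Laplace(0.0, scale).sample(param.shape)
                param.add_(noise.to(param.device, param.dtype))
        return model

    # Usage example with a small CNN standing in for the paper's target models.
    if __name__ == "__main__":
        cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
        laplace_obfuscate(cnn, scale=1e-3)

In practice, the noise scale would have to be tuned so that classification accuracy is preserved while the obfuscated weights degrade the quality of inverted images, which is the trade-off the paper evaluates with LPIPS, SSIM, and PSNR.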

Keywords