IEEE Access (Jan 2024)

On the Security of Learnable Image Encryption for Privacy-Preserving Deep Learning

  • April Pyone Maung Maung,
  • Isao Echizen,
  • Hitoshi Kiya

DOI
https://doi.org/10.1109/ACCESS.2024.3454199
Journal volume & issue
Vol. 12
pp. 126415 – 126425

Abstract

In this paper, we evaluate the security of learnable image encryption methods proposed for privacy-preserving deep learning, and we propose a new generative model-based attack built on latent diffusion models. Various learnable encryption methods have been studied to protect the sensitive visual information of plain images, and some have been reported to be robust against all existing attacks. However, previous attacks on image encryption focus only on traditional cryptanalytic attacks or reverse translation models, so they cannot recover any visual information when a block-scrambling encryption step, which effectively destroys global information, is applied. Rather than reconstructing images identical to the plain ones from encrypted images, generative models such as StyleGAN were previously explored to recover styles that can reveal identifiable information from the encrypted images. However, large-scale off-the-shelf latent diffusion models have not yet been considered in this regard. Therefore, in this paper, we utilize Stable Diffusion as a generative model-based attack and evaluate the security of learnable image encryption. Experiments were carried out on various datasets, showing that images reconstructed by the Stable Diffusion-based attack contain visual cues similar to those of the plain images.
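The block-scrambling step mentioned in the abstract can be illustrated with a minimal sketch: the image is cut into fixed-size tiles, and the tiles are permuted with a key-seeded pseudorandom permutation, which destroys global spatial structure while keeping local pixel statistics. The function name, block size, and key schedule below are assumptions for illustration, not the exact scheme evaluated in the paper.

```python
import numpy as np

def block_scramble(img: np.ndarray, block: int, key: int) -> np.ndarray:
    """Illustrative block-scrambling: permute (block x block) tiles with a
    key-seeded PRNG. Hypothetical sketch, not the paper's exact method."""
    h, w = img.shape[:2]
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # Cut the image into a grid of tiles, row-major order.
    tiles = [img[i:i + block, j:j + block]
             for i in range(0, h, block)
             for j in range(0, w, block)]
    # Key-derived permutation of tile positions (assumed key schedule).
    perm = np.random.default_rng(key).permutation(len(tiles))
    tiles = [tiles[p] for p in perm]
    # Reassemble the scrambled grid.
    n_cols = w // block
    rows = [np.concatenate(tiles[r * n_cols:(r + 1) * n_cols], axis=1)
            for r in range(h // block)]
    return np.concatenate(rows, axis=0)
```

Because the permutation only reorders tiles, the scrambled image has the same shape and the same multiset of pixel values as the original, which is why attacks that rely on global structure fail while pixel-level statistics survive.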

Keywords