IEEE Access (Jan 2019)

An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack

  • Cheolhee Park,
  • Dowon Hong,
  • Changho Seo

DOI
https://doi.org/10.1109/ACCESS.2019.2938759
Journal volume & issue
Vol. 7
pp. 124988–124999

Abstract

As the amount of data and available computational power explosively increase, machine learning techniques are producing valuable results. In particular, models based on deep neural networks have shown remarkable performance in various domains. At the same time, the development of neural network models has raised privacy concerns. Recently, as privacy breach attacks on the training datasets of neural network models have been proposed, research on privacy-preserving neural networks has been conducted. Among privacy-preserving approaches, differential privacy provides a strict privacy guarantee, and various differentially private mechanisms have been studied for neural network models. However, it is not clear how appropriate privacy parameters should be chosen, considering both the model's performance and the degree of privacy guaranteed. In this paper, we study how to set appropriate privacy parameters for preserving differential privacy based on the resistance to privacy breach attacks in neural networks. In particular, we focus on the model inversion attack against neural network models and study how to apply differential privacy as a countermeasure against this attack while retaining the utility of the model. To quantify resistance to the model inversion attack, we introduce a new attack performance metric that leverages a deep learning model instead of a survey-based approach, and we capture the relationship between attack probability and the degree of privacy guarantee.
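
The setting the abstract describes can be illustrated with a brief sketch. The snippet below is a minimal PyTorch illustration, not the authors' exact procedure: dp_sgd_step performs a DP-SGD-style update (per-example gradient clipping plus Gaussian noise, in the style of Abadi et al.), and invert performs a Fredrikson-style model inversion via gradient ascent on the input. The function names, the MNIST-like input shape, and all hyperparameters (clip_norm, noise_mult, step counts) are illustrative assumptions; a smaller noise multiplier corresponds to a larger privacy parameter ε and hence a weaker privacy guarantee.

```python
# Illustrative sketch only: DP-SGD-style training step and a gradient-based
# model inversion attack. Shapes and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def dp_sgd_step(model, batch_x, batch_y, lr=0.05, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD-style update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise with std noise_mult * clip_norm, then step.
    batch_x is assumed to hold per-example inputs, e.g. shape (B, 1, 28, 28)."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * (noise_mult * clip_norm)
            p -= lr * (s + noise) / len(batch_x)

def invert(model, target_class, shape=(1, 1, 28, 28), steps=500, lr=0.1):
    """Fredrikson-style inversion: starting from a blank input, ascend the
    model's confidence for target_class w.r.t. the input to recover a
    class-representative image."""
    model.eval()
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), torch.tensor([target_class])).backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return x.detach()
```

Under this sketch, one would train copies of a model at several noise levels and compare how recognizable the inverted images remain; quantifying that recognizability with a learned classifier rather than a human survey is the role the paper's attack performance metric plays.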

Keywords