IEEE Access (Jan 2019)

Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions

  • Jingwen Zhao,
  • Yunfang Chen,
  • Wei Zhang

DOI: https://doi.org/10.1109/ACCESS.2019.2909559
Journal volume & issue: Vol. 7, pp. 48901–48911

Abstract

Nowadays, deep learning is increasingly applied in real-world scenarios that involve the collection and analysis of sensitive data, which often leads to privacy leakage. Differential privacy is widely adopted in traditional scenarios because of its rigorous mathematical guarantee; however, whether it works effectively in deep learning models remains uncertain. In this paper, we first introduce the privacy attacks facing deep learning models from three aspects: membership inference, training data extraction, and model extraction. We then review the basic theory of differential privacy and its extended concepts in deep learning scenarios. Next, to analyze existing works that combine differential privacy with deep learning, we classify them by the layer at which the differential privacy mechanism is deployed (input layer, hidden layer, or output layer) and discuss their advantages and disadvantages. Finally, we point out several key issues that remain to be solved and provide a broader outlook on this research direction.
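
For reference, the "rigorous mathematical guarantee" mentioned above is usually stated as (ε, δ)-differential privacy. The definition below is the standard one (Dwork et al.) and is included only as an illustrative reminder; the notation is not taken from the paper itself.

  % A randomized mechanism M is (epsilon, delta)-differentially private if,
  % for all neighboring datasets D and D' differing in a single record and
  % for every measurable set S of outputs,
  \[
    \Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S] + \delta .
  \]

To make the layer-based classification concrete, the following minimal sketch shows gradient perturbation in the style of DP-SGD (clip per-example gradients, then add Gaussian noise), which surveys of this kind typically place in the hidden-layer (training-phase) category rather than at the input or output layer. The function name and parameter values are hypothetical choices for illustration, not notation from the paper.

  import numpy as np

  def privatize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
      """Clip per-example gradients and add Gaussian noise before averaging."""
      rng = np.random.default_rng(seed)
      clipped = []
      for g in per_example_grads:
          norm = np.linalg.norm(g)
          # Clipping bounds each example's contribution (its sensitivity).
          clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
      summed = np.sum(clipped, axis=0)
      # The noise scale is tied to the clipping bound, as in DP-SGD.
      noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
      return (summed + noise) / len(per_example_grads)

  # Example: a batch of 4 per-example gradients for a 3-parameter model.
  grads = [np.random.default_rng(i).normal(size=3) for i in range(4)]
  print(privatize_gradients(grads))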

Keywords