IEEE Access (Jan 2021)

$p$-Power Exponential Mechanisms for Differentially Private Machine Learning

Yanan Li, Xuebin Ren, Fangyuan Zhao, Shusen Yang

DOI: https://doi.org/10.1109/ACCESS.2021.3129130
Vol. 9, pp. 155018–155034

Abstract


Differentially private stochastic gradient descent (DP-SGD), which perturbs clipped gradients, is a popular approach for private machine learning. The Gaussian mechanism (GM), combined with the moments accountant (MA), has demonstrated a much better privacy-utility tradeoff than the advanced composition theorem. However, it is unclear whether this tradeoff can be further improved by mechanisms with other noise distributions. To this end, we extend GM ($p=2$) to the generalized $p$-power exponential mechanism family ($p$EM with $p>0$) and establish its privacy guarantee. This allows the privacy-utility tradeoff of GM to be enhanced by searching for noise distributions in a wider mechanism space. To implement $p$EM in practice, we design an effective sampling method and extend MA to $p$EM for tightly estimating the privacy loss. In addition, we formally prove the non-optimality of GM via the variational method. Numerical experiments validate the properties of $p$EM and provide a comprehensive comparison between $p$EM and two other state-of-the-art methods. Experimental results show that $p$EM is preferable when the noise variance is small relative to the signal and the dimension is not too high.
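As a rough illustration of the idea (not the authors' exact construction), the sketch below draws noise from a generalized $p$-power exponential density proportional to $\exp(-(|x|/\sigma)^p)$, which recovers a Gaussian shape at $p=2$, and adds it to a clipped, averaged gradient in DP-SGD style. The function names and the density normalization are assumptions made for illustration; calibrating the scale $\sigma$ to a target $(\epsilon, \delta)$ requires the paper's extended moments accountant, which is not reproduced here.

```python
import numpy as np

def sample_p_exponential(scale, p, size, rng=None):
    """Draw noise with density proportional to exp(-(|x| / scale)**p).

    Hypothetical helper: uses the standard gamma-based construction for
    generalized exponential densities, not necessarily the paper's sampler.
    If G ~ Gamma(1/p, 1), then sign * G**(1/p) has unit-scale p-power
    exponential density; p = 2 gives a Gaussian shape up to rescaling.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=size)
    signs = rng.choice([-1.0, 1.0], size=size)
    return scale * signs * g ** (1.0 / p)

def dp_sgd_step(grads, clip_norm, scale, p, rng=None):
    """One DP-SGD-style aggregation: clip each per-example gradient to
    clip_norm, average, then perturb with p-power exponential noise.
    The scale is taken as given; privacy accounting is not shown."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = sample_p_exponential(scale, p, size=mean_grad.shape, rng=rng)
    return mean_grad + noise

# Toy usage: 32 per-example gradients of dimension 10.
grads = [np.random.randn(10) for _ in range(32)]
noisy_grad = dp_sgd_step(grads, clip_norm=1.0, scale=0.5, p=1.5)
print(noisy_grad)
```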

Keywords