网络与信息安全学报 (Chinese Journal of Network and Information Security), Aug 2023

Privacy leakage risk assessment for reversible neural network

  • Yifan HE, Jie ZHANG, Weiming ZHANG, Nenghai YU

DOI
https://doi.org/10.11959/j.issn.2096-109x.2023051
Journal volume & issue
Vol. 9, no. 4
pp. 29 – 39

Abstract

In recent years, deep learning has emerged as a crucial technology in various fields. However, training deep learning models often requires a substantial amount of data, which may contain private and sensitive information such as personal identities and financial or medical details. Consequently, research on the privacy risks associated with artificial intelligence models has garnered significant attention in academia. Privacy research on deep learning models has nevertheless focused mainly on traditional neural networks, with limited exploration of emerging architectures such as reversible networks. Reversible neural networks have a distinct structure in which the input of an upper layer can be recovered directly from the output of a lower layer. Intuitively, this structure retains more information about the training data and may therefore carry a higher risk of privacy leakage than traditional networks. The privacy of reversible networks was therefore examined from two aspects: data privacy leakage and model function privacy leakage. A risk assessment strategy was applied to two classical reversible networks, RevNet and i-RevNet, using four attack methods: membership inference attack, model inversion attack, attribute inference attack, and model extraction attack. The experimental results demonstrate that reversible networks exhibit more serious privacy risks than traditional neural networks under membership inference, model inversion, and attribute inference attacks, while showing privacy risks similar to those of traditional networks under model extraction attack. Given the increasing adoption of reversible neural networks in various tasks, including those involving sensitive data, it is imperative to address these privacy risks. Based on the analysis of the experimental results, potential countermeasures were proposed that can be applied to the future development of reversible networks.
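The exact invertibility described in the abstract can be illustrated with a minimal sketch of a RevNet-style additive coupling block (the residual functions `f` and `g` below are arbitrary placeholders, not the functions used in the paper): splitting the activation into two halves makes the forward map algebraically invertible, so the block's input is recovered exactly from its output — the property that intuitively lets a reversible network retain more information about its training data.

```python
import numpy as np

# Placeholder residual functions; in a real RevNet these would be
# learned convolutional sub-networks. Any fixed maps work here,
# because invertibility comes from the coupling structure itself.
def f(x):
    return np.tanh(x)

def g(x):
    return np.sin(x)

def forward(x1, x2):
    """Additive coupling: (x1, x2) -> (y1, y2)."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    """Exact inversion: recover (x1, x2) from (y1, y2) in closed form."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=4), rng.normal(size=4)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # exact reconstruction
```

Because no information is discarded in the forward pass, an attacker who observes intermediate activations of such a block can, in principle, reconstruct the input exactly — which is why attacks like model inversion are expected to be more effective against reversible architectures.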

Keywords