Jisuanji kexue (Jan 2023)

Survey of Membership Inference Attacks for Machine Learning

  • CHEN Depeng, LIU Xiao, CUI Jie, HE Daojing

DOI
https://doi.org/10.11896/jsjkx.220800227
Journal volume & issue
Vol. 50, no. 1
pp. 302 – 317

Abstract


With the continuous development of machine learning, and of deep learning in particular, artificial intelligence has been integrated into all aspects of people's daily lives. Machine learning models are deployed in a wide range of applications, making traditional applications more intelligent. In recent years, however, research has shown that the personal data used to train machine learning models is exposed to the risk of privacy disclosure. Membership inference attacks (MIAs) are an important class of attacks against machine learning models that threaten users' privacy. An MIA aims to determine whether a given data sample was used to train the target model. When the data is closely tied to an individual, as in the medical and financial domains, such an attack directly compromises the user's private information. This paper first introduces the background knowledge of membership inference attacks. It then classifies existing MIAs according to whether the attacker builds a shadow model, and summarizes the threats posed by MIAs in different fields. The paper also surveys defenses against MIAs, classifying existing defense mechanisms into strategies that prevent model overfitting, apply model compression, and add perturbation. Finally, it analyzes the advantages and disadvantages of current MIAs and defense mechanisms, and proposes possible directions for future research on MIAs.
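To make the shadow-model attack setting described in the abstract concrete, the following is a minimal sketch of a shadow-model membership inference attack using scikit-learn on synthetic data. The model choices, split sizes, and attack features (prediction confidence vectors) are illustrative assumptions, not the experimental setup of the surveyed works.

```python
# Sketch of a shadow-model membership inference attack:
# 1) train a target model (the victim), 2) train a shadow model on data with
# known membership labels, 3) train an attack classifier on the shadow model's
# confidence vectors, 4) use it to infer membership against the target model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)

# Four disjoint pools: target members/non-members and shadow members/non-members.
X_tgt_in, y_tgt_in = X[:1000], y[:1000]              # members of the target model
X_tgt_out, y_tgt_out = X[1000:2000], y[1000:2000]    # non-members of the target model
X_sh_in, y_sh_in = X[2000:3000], y[2000:3000]        # members of the shadow model
X_sh_out, y_sh_out = X[3000:], y[3000:]              # non-members of the shadow model

# Target model: the attacker only observes its prediction confidences.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tgt_in, y_tgt_in)

# Shadow model: trained by the attacker to mimic the target on data whose
# membership status the attacker controls and therefore knows.
shadow = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_sh_in, y_sh_in)

# Attack training set: shadow confidence vectors labeled 1 = member, 0 = non-member.
attack_X = np.vstack([shadow.predict_proba(X_sh_in), shadow.predict_proba(X_sh_out)])
attack_y = np.concatenate([np.ones(len(X_sh_in)), np.zeros(len(X_sh_out))])
attack_model = LogisticRegression(max_iter=1000).fit(attack_X, attack_y)

# Inference phase: predict membership of samples from the target model's confidences.
test_X = np.vstack([target.predict_proba(X_tgt_in), target.predict_proba(X_tgt_out)])
test_y = np.concatenate([np.ones(len(X_tgt_in)), np.zeros(len(X_tgt_out))])
print("membership inference accuracy:", accuracy_score(test_y, attack_model.predict(test_X)))
```

The attack succeeds to the extent that the target model behaves differently on training members than on unseen data (typically higher confidence on members), which is why the defenses summarized above focus on reducing overfitting or perturbing the model's outputs.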

Keywords