Sensors (Sep 2022)
Recovering a User’s Private Training Image Data from Gradients in Federated Learning
Abstract
Exchanging gradients is a widely used practice in modern multi-node machine learning systems (e.g., distributed training and federated learning). Model gradients and weights have long been presumed safe to share. However, several studies have shown that gradient inversion techniques can reconstruct input images at the pixel level. In this study, we review the research on data leakage via gradient inversion and categorize existing works into three groups: (i) Bias Attacks, (ii) Optimization-Based Attacks, and (iii) Linear Equation Solver Attacks. Based on the characteristics of these algorithms, we propose a privacy attack system, the Single-Sample Reconstruction Attack System (SSRAS). This system can carry out image reconstruction regardless of whether the label can be determined. It extends gradient inversion attacks from a fully connected layer with bias terms to fully connected layers and convolutional neural networks with or without bias terms. We also propose an Improved R-GAP algorithm, which uses the DLG algorithm to derive the ground truth. Furthermore, we introduce the Rank Analysis Index (RA-I) to measure whether a user’s raw image data can be reconstructed. This rank analysis derives virtual constraints V_i from the weights. Compared with the most representative attack algorithms, the proposed reconstruction attack system recovers a user’s private training images with high fidelity and a high attack success rate. Experimental results also show the superiority of the attack system over other state-of-the-art attack algorithms.
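Since SSRAS builds on the optimization-based family, a minimal sketch of that idea may help orient the reader: a DLG-style attacker initializes a dummy image and label and optimizes them until the gradients they induce match the gradients the victim shared. The sketch below assumes a PyTorch-style API; the function name dlg_attack and its parameters are illustrative, not the paper’s actual implementation.

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, true_grads, input_shape, num_classes, steps=300):
    """DLG-style gradient inversion: optimize a dummy (image, label) pair
    so that the gradients it induces match the gradients shared by the victim."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)   # dummy image
    dummy_y = torch.randn(1, num_classes, requires_grad=True)    # dummy soft label
    params = tuple(model.parameters())
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Cross-entropy of the prediction against the softmaxed dummy label.
            loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
            # create_graph=True keeps the gradient computation differentiable,
            # so we can optimize the dummy inputs through it.
            dummy_grads = torch.autograd.grad(loss, params, create_graph=True)
            grad_diff = sum(((dg - tg) ** 2).sum()
                            for dg, tg in zip(dummy_grads, true_grads))
            grad_diff.backward()
            return grad_diff
        optimizer.step(closure)

    return dummy_x.detach(), dummy_y.detach()
```

Note that the attacker needs only the model weights and the shared gradients (true_grads), which is exactly the information exchanged in a single federated-learning update; this is what makes the single-sample setting studied in the paper realistic.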
Keywords