Tongxin xuebao (Sep 2021)
Label-flipping adversarial attack on graph neural networks
Abstract
To broaden the range of adversarial attacks against graph neural networks and fill the corresponding research gap, label flipping attack methods were proposed to evaluate the robustness of graph neural networks under label noise. The mechanisms by which adversarial attacks take effect were summarized as three basic hypotheses: the contradictory data hypothesis, the parameter discrepancy hypothesis, and the identically distributed hypothesis. Label flipping attack models were established on the basis of these three hypotheses. Using gradient-oriented attack methods, it was proved theoretically that the attack gradients obtained under the parameter discrepancy hypothesis coincide with those obtained under the identically distributed hypothesis, which establishes the equivalence of the two attack methods. The advantages and disadvantages of the models built on the different hypotheses were compared and analyzed through experiments, and extensive experimental results verify the effectiveness of the proposed attack models.
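To make the gradient-oriented attack concrete, the following is a minimal, hypothetical sketch in PyTorch; it is not the models proposed in the paper. It poisons the training labels of a linearized two-layer GCN surrogate by relaxing the one-hot label matrix to continuous values, taking the gradient of the training loss with respect to that matrix, and greedily flipping the labels with the highest gradient scores under a fixed budget. The toy graph, the surrogate, the greedy flip rule, and all names (`train_loss`, `budget`, etc.) are illustrative assumptions.

```python
# Hypothetical gradient-based label flipping sketch (illustration only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, c, budget = 40, 16, 3, 5            # nodes, feature dim, classes, flip budget

# Toy graph: random symmetric adjacency with self-loops, symmetrically normalized.
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(1.0)
d_inv_sqrt = A.sum(1).pow(-0.5)
A_hat = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

X = torch.randn(n, d)                     # node features
y = torch.randint(0, c, (n,))             # clean labels
train_mask = torch.zeros(n, dtype=torch.bool)
train_mask[:30] = True                    # labels the attacker may flip

# Relaxed one-hot label matrix; its gradient ranks candidate flips.
Y = F.one_hot(y, c).float().requires_grad_(True)
W = torch.randn(d, c, requires_grad=True)

def train_loss(W, Y):
    # Linearized two-layer GCN surrogate: A_hat @ A_hat @ X @ W.
    logits = A_hat @ (A_hat @ (X @ W))
    logp = F.log_softmax(logits[train_mask], dim=1)
    return -(Y[train_mask] * logp).sum(1).mean()  # soft cross-entropy

# Fit the surrogate on the clean labels.
opt = torch.optim.Adam([W], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    train_loss(W, Y.detach()).backward()
    opt.step()

# Gradient of the training loss w.r.t. the relaxed labels: a large entry on a
# wrong class marks a flip that perturbs training the most (first-order view).
grad_Y = torch.autograd.grad(train_loss(W, Y), Y)[0]
scores = grad_Y.clone()
scores[torch.arange(n), y] = -float("inf")  # "flipping" to the true class is a no-op
scores[~train_mask] = -float("inf")         # only training labels may be flipped

# Greedily take the `budget` highest-scoring (node, class) flips.
top = scores.flatten().topk(budget).indices
y_poisoned = y.clone()
y_poisoned[top // c] = top % c
print("flipped nodes:", (y_poisoned != y).nonzero(as_tuple=True)[0].tolist())
```

In the paper's framing, the attacker's objective would differ per hypothesis (for instance, maximizing parameter discrepancy versus disturbing the training distribution); the sketch above uses a single first-order training-loss heuristic only to illustrate how label gradients can rank candidate flips.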