Dianxin Kexue (Telecommunications Science), January 2019
A novel perturbation attack on SVM via a greedy algorithm
Abstract
With increasing concern over the security of machine learning, an adversarial sample generation method for SVM was proposed. The attack occurred at the testing stage by manipulating a sample to fool the SVM classification model. A greedy strategy was used to search for salient feature subsets in the kernel space, and the resulting kernel-space perturbation was then projected back into the input space to obtain the attack sample. The method caused test samples to be misclassified with less than 7% perturbation. Experiments were carried out on two data sets, and the attack succeeded on both: on the artificial data set, the classification error rate exceeded 50% under 2% perturbation; on the MNIST data set, the classification error rate approached 100% under 5% perturbation.
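The abstract describes the method only at a high level. Below is a minimal sketch of the general idea, assuming an RBF-kernel SVM trained with scikit-learn. The greedy step here selects input features directly by the magnitude of the decision-function gradient, which stands in for the paper's kernel-space salient-feature search and back-projection; the 7% L1 budget, `step`, and `max_iter` values are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of a greedy perturbation attack on an SVM (not the paper's
# exact kernel-space method): at each step, perturb the single input feature
# with the largest decision-function gradient until the label flips or the
# perturbation budget is spent. All names and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

GAMMA = 0.1  # RBF kernel width, fixed so the gradient below matches the model

def decision_gradient(clf, x, gamma=GAMMA):
    """Gradient of the RBF-SVM decision function f(x) = sum_i a_i K(sv_i, x) + b."""
    sv = clf.support_vectors_
    coef = clf.dual_coef_.ravel()                       # a_i = alpha_i * y_i
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))  # K(sv_i, x)
    # d/dx exp(-gamma * ||sv_i - x||^2) = 2 * gamma * (sv_i - x) * K(sv_i, x)
    return (coef * k) @ (2 * gamma * (sv - x))

def greedy_attack(clf, x, step=0.05, budget=0.07, max_iter=500):
    """Greedily perturb one salient feature at a time; stop when the sample
    is misclassified or the relative L1 perturbation exceeds `budget`."""
    x_adv = x.astype(float).copy()
    target = -np.sign(clf.decision_function([x])[0])  # push across the boundary
    limit = budget * np.sum(np.abs(x))
    for _ in range(max_iter):
        if np.sign(clf.decision_function([x_adv])[0]) == target:
            break                                     # label flipped
        g = decision_gradient(clf, x_adv)
        j = np.argmax(np.abs(g))                      # greedy: most salient feature
        x_adv[j] += step * np.sign(g[j]) * target     # move f(x) toward `target`
        if np.sum(np.abs(x_adv - x)) > limit:
            break                                     # budget exhausted
    return x_adv

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = SVC(kernel="rbf", gamma=GAMMA).fit(X, y)
x0 = X[0]
x_adv = greedy_attack(clf, x0)
print("original label:", clf.predict([x0])[0],
      "| adversarial label:", clf.predict([x_adv])[0])
```

A faithful reimplementation would instead run the greedy search over feature subsets in the kernel-induced space and solve a pre-image problem to project the kernel-space perturbation back into the input space, as the abstract describes.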