Chinese Journal of Network and Information Security (网络与信息安全学报), Aug 2020

Improving algorithm robustness in adversarial environments via moving target defense

  • HE Kang, ZHU Yuefei, LIU Long, LU Bin, LIU Bin

DOI
https://doi.org/10.11959/j.issn.2096-109x.2020052
Journal volume & issue
Vol. 6, no. 4
pp. 67 – 76

Abstract


Traditional machine learning models operate in a non-adversarial setting, assuming that training data and test data share the same distribution. However, this hypothesis does not hold in areas such as malicious document detection: an adversary attacks the classification algorithm by modifying test samples so that carefully crafted malicious samples evade detection by machine learning models. To improve the security of machine learning algorithms, a moving target defense (MTD) based method was proposed to enhance robustness. Experimental results show that the proposed method can effectively resist evasion attacks on the detection algorithm through dynamic transformation in the stages of the algorithm model, feature selection, and result output.
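To illustrate the idea summarized in the abstract, the sketch below shows one plausible form of a moving-target defense wrapper that randomizes the three stages mentioned (the model, the feature subset, and the output threshold) at each prediction. All names and the pool of toy models are illustrative assumptions, not the authors' actual method or API.

```python
import random

class MovingTargetClassifier:
    """Minimal MTD sketch: randomize model, feature subset, and output
    threshold per prediction, so an attacker cannot reliably probe a
    single fixed decision boundary. Hypothetical design, for illustration."""

    def __init__(self, models, feature_subsets, seed=None):
        # models: callables mapping a projected feature vector -> score in [0, 1]
        # feature_subsets: lists of feature indices to project onto
        self.models = models
        self.feature_subsets = feature_subsets
        self.rng = random.Random(seed)

    def predict(self, sample):
        # Stage 1 (algorithm model): randomly pick a model from the pool.
        model = self.rng.choice(self.models)
        # Stage 2 (feature selection): randomly pick a feature subset.
        subset = self.rng.choice(self.feature_subsets)
        projected = [sample[i] for i in subset]
        score = model(projected)
        # Stage 3 (result output): jitter the decision threshold slightly.
        threshold = 0.5 + self.rng.uniform(-0.05, 0.05)
        return 1 if score >= threshold else 0

# Toy usage: two scoring functions over different feature views.
mtd = MovingTargetClassifier(
    models=[lambda xs: sum(xs) / len(xs), lambda xs: max(xs)],
    feature_subsets=[[0, 1], [1, 2]],
    seed=42,
)
print(mtd.predict([0.9, 0.9, 0.9]))  # high scores under every view -> 1
print(mtd.predict([0.1, 0.1, 0.1]))  # low scores under every view -> 0
```

Because the model, feature view, and threshold are re-drawn on every call, a malicious-document sample crafted to evade one fixed configuration has no guarantee of evading the configuration actually used at detection time.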

Keywords