网络与信息安全学报 (Chinese Journal of Network and Information Security), Jun 2023
Double adversarial attack against license plate recognition system
Abstract
Recent studies have revealed that deep neural networks (DNNs) used in artificial intelligence systems are highly vulnerable to adversarial example attacks. To address this issue, a double adversarial attack method was proposed for DNN-based license plate recognition (LPR) systems. It was demonstrated that an adversarial patch added at the pattern location of the license plate can prevent the object detection subsystem of the LPR system from detecting the license plate class. Additionally, natural rust and stains were simulated by adding irregular, singly connected regions of random points to the license plate image, which causes the license plate number to be misrecognized. For the license plate study, adversarial patches of different shapes and colors were designed to generate adversarial license plates, which were then transferred to the physical world. Experimental results show that the designed adversarial examples are undetectable by the human eye and can deceive license plate recognition systems such as EasyPR, with a physical-world attack success rate of up to 99%. The study sheds light on the vulnerability of deep learning and on adversarial attacks against LPR, and offers a positive contribution toward improving the robustness of license plate recognition models.
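The abstract does not give implementation details, so the following minimal Python sketch only illustrates one plausible way to build an irregular, singly connected region of random points and blend it onto a plate image as a rust-like stain. It is not the authors' method; the function names, the file name "plate.png", the color tone, and all parameter values are hypothetical assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation): grow an irregular,
# singly connected blob by a random walk, then blend a rust-colored stain
# onto a license plate image wherever the blob mask is set.
# Assumes NumPy and OpenCV are installed; all parameters are placeholders.
import cv2
import numpy as np

def random_connected_mask(h, w, steps=400, seed=None):
    """Grow an irregular, singly connected blob via a random walk on the pixel grid."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    y = int(rng.integers(h // 4, 3 * h // 4))
    x = int(rng.integers(w // 4, 3 * w // 4))
    for _ in range(steps):
        mask[y, x] = 255
        y = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
        x = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
    # Morphological closing fills small gaps so the region stays one connected blotch.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def add_rust_stain(plate_bgr, alpha=0.6, seed=None):
    """Blend a rust-colored stain into the plate image inside the random blob mask."""
    h, w = plate_bgr.shape[:2]
    mask = random_connected_mask(h, w, seed=seed).astype(np.float32) / 255.0
    mask = cv2.GaussianBlur(mask, (9, 9), 0)[..., None]   # soften edges for a natural look
    rust = np.zeros_like(plate_bgr)
    rust[:] = (30, 60, 140)                               # brownish tone in BGR (assumed)
    blend = alpha * mask
    out = plate_bgr.astype(np.float32) * (1 - blend) + rust.astype(np.float32) * blend
    return out.astype(np.uint8)

if __name__ == "__main__":
    plate = cv2.imread("plate.png")                       # hypothetical input plate image
    stained = add_rust_stain(plate, seed=0)
    cv2.imwrite("plate_stained.png", stained)             # candidate adversarial example
```

In practice such a stain would only be a starting point: the pixel values inside the connected region would still need to be optimized (for example, by gradient-based search against the recognition model) before they reliably cause misrecognition, which is outside the scope of this sketch.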