IEEE Access (Jan 2023)

Augment CAPTCHA Security Using Adversarial Examples With Neural Style Transfer

  • Nghia Dinh,
  • Kiet Tran-Trung,
  • Vinh Truong Hoang

DOI
https://doi.org/10.1109/ACCESS.2023.3298442
Journal volume & issue
Vol. 11
pp. 83553–83561

Abstract


To counteract the rise of bots, many CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have been developed over the years. Automated attacks employing powerful deep learning techniques, however, have achieved high success rates against common CAPTCHAs, including image-based and text-based schemes. Encouragingly, Adversarial Examples, which introduce imperceptible noise into an input, have recently been shown to significantly degrade the performance of DNNs (Deep Neural Networks). The authors improved the CAPTCHA security architecture by increasing the resilience of Adversarial Examples when combined with Neural Style Transfer. The findings demonstrated that the proposed approach considerably improves the security of ordinary CAPTCHAs.
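The abstract does not specify the adversarial-example generator used; as a point of reference, the standard FGSM (Fast Gradient Sign Method) illustrates the core idea of perturbing an input with bounded, imperceptible noise in the direction that increases the classifier's loss. The sketch below applies FGSM to a toy linear classifier; the weights `W`, input `x`, and label `y` are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Minimal FGSM sketch on a toy linear classifier (logits = W @ x).
# The paper targets DNN-based CAPTCHA solvers; this linear stand-in
# only demonstrates the perturbation mechanism.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # hypothetical weights: 2 classes, 4 features
x = rng.normal(size=4)        # a clean input (e.g. CAPTCHA image features)
y = 0                         # its true class index

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(W, x, y):
    # Cross-entropy loss of the model's prediction for the true class y.
    return -np.log(softmax(W @ x)[y])

def input_grad(W, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x:
    # dL/dx = W.T @ (softmax(W @ x) - onehot(y))
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

eps = 0.1                                        # L-infinity noise budget
x_adv = x + eps * np.sign(input_grad(W, x, y))   # one FGSM step
```

The perturbation is bounded by `eps` in every coordinate, so the adversarial input stays visually close to the original while its loss under the model can only increase, which is what makes such examples attractive as a CAPTCHA hardening layer.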

Keywords