Nature Communications (Mar 2019)

Humans can decipher adversarial images

  • Zhenglong Zhou,
  • Chaz Firestone

DOI
https://doi.org/10.1038/s41467-019-08931-6
Journal volume & issue
Vol. 10, no. 1
pp. 1–9

Abstract


Convolutional Neural Networks (CNNs) have reached human-level performance on image-classification benchmarks, yet they can be "fooled" by adversarial examples: inputs crafted to elicit bizarre misclassifications from machines. Here, the authors show that humans can anticipate which objects CNNs will "see" in adversarial images.
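The paper studies human judgments about adversarial images rather than how such images are made, but the underlying attack idea is easy to illustrate. The sketch below is a minimal FGSM-style perturbation (sign-of-gradient step) applied to a toy logistic classifier; the model, weights, and epsilon are all hypothetical stand-ins, not the CNNs or stimuli used in the paper.

```python
import numpy as np

# Toy FGSM-style adversarial perturbation on a logistic classifier.
# Everything here (weights, input, eps) is illustrative only.

rng = np.random.default_rng(0)
w = rng.normal(size=16)              # fixed "trained" weights
b = 0.0

def predict(x):
    """Probability the classifier assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=16)              # a clean input
y = 1.0 if predict(x) > 0.5 else 0.0 # treat the model's own label as ground truth

# Gradient of the logistic loss with respect to the input is (p - y) * w.
grad = (predict(x) - y) * w

# FGSM step: perturb each pixel in the sign direction that increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad)

# The perturbed copy moves toward the opposite class.
print(predict(x), predict(x_adv))
```

Against a real CNN the same recipe is applied per pixel with a much smaller eps, which is why the perturbation can change the predicted label while remaining nearly invisible to people.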