Nature Communications (Mar 2019)
Humans can decipher adversarial images
Abstract
Convolutional Neural Networks (CNNs) have reached human-level performance on image-classification benchmarks, but they can be "fooled" by adversarial examples that elicit bizarre misclassifications from machines. Here, the authors show that humans can anticipate which objects CNNs will see in adversarial images.