Explaining and Harnessing Adversarial Examples

less than 1 minute read

Published:

This post covers the paper "Explaining and Harnessing Adversarial Examples" (Goodfellow et al., 2015).

  • Adversarial examples are created by applying small but intentionally worst-case perturbations to real examples, causing ML models to misclassify them with high confidence. The paper argues that the primary cause of this vulnerability is the linear nature of neural networks in high-dimensional spaces.

This linear explanation is supported by new quantitative results, and it gives the first account of the most intriguing property of adversarial examples: their generalization across architectures and training sets. The linear view also yields a simple and fast method of generating adversarial examples, the fast gradient sign method (FGSM), which perturbs an input by η = ε · sign(∇_x J(θ, x, y)). Using FGSM to provide examples for adversarial training, the authors reduce the test set error of a maxout network on the MNIST dataset.
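
To make the method concrete, here is a minimal FGSM sketch in PyTorch (not the authors' original code); `model`, the inputs `x`, labels `y`, and budget `epsilon` are assumed placeholders, and the final clamp assumes pixel values normalized to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft adversarial examples with the fast gradient sign method:
    x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # J(theta, x, y)
    loss.backward()                        # populates x.grad with grad_x J
    x_adv = x + epsilon * x.grad.sign()    # single signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

A single signed-gradient step is often enough to flip the model's prediction, which is exactly what the paper's linearity hypothesis predicts: many small coordinate-wise perturbations aligned with the gradient add up to a large change in a high-dimensional dot product.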