Natural Adversarial Examples

This post covers the paper Natural Adversarial Examples, which introduces the IMAGENET-A and IMAGENET-O datasets.

IMAGENET-A and IMAGENET-O

  • IMAGENET-A
    • adversarially filtered images that fool current ImageNet classifiers.
      • removed images that a ResNet-50 classifies correctly, keeping only misclassified ones (a filtering sketch for both datasets follows this list).
      • manually selected visually clear images.
      • examples that fool ResNet-50 transfer reliably to other, unseen models.
  • IMAGENET-O
    • dataset of adversarially filtered examples for ImageNet out-of-distribution detectors.
      • started from ImageNet-22K and deleted examples belonging to ImageNet-1K classes.
      • kept examples that are classified by a ResNet-50 as an ImageNet-1K class with high confidence.
      • manually selected visually clear images.
  • Producing a dataset without multilabel images: images containing multiple valid classes are excluded, so each example has a single correct label.
  • IMAGENET-A Class Restrictions (quoting the paper): We select a 200-class subset of ImageNet-1K’s 1,000 classes so that errors among these 200 classes would be considered egregious [10]. For instance, wrongly classifying Norwich terriers as Norfolk terriers does less to demonstrate faults in current classifiers than mistaking a Persian cat for a candle. We additionally avoid rare classes such as “snow leopard,” classes that have changed much since 2012 such as “iPod,” coarse classes such as “spiral,” classes that are often image backdrops such as “valley,” and finally classes that tend to overlap such as “honeycomb,” “bee,” “bee house,” and “bee eater”; “eraser,” …
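
Both datasets rest on the same filtering idea: run candidate images through a fixed ResNet-50 and keep only those that expose a failure, either a misclassification (IMAGENET-A) or a confidently predicted ImageNet-1K label on an out-of-distribution image (IMAGENET-O). The sketch below is my own illustration, not the authors' code; it assumes a standard torchvision ResNet-50, and the candidate image paths and the 0.85 confidence threshold are placeholders, not values from the paper.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for a torchvision ResNet-50.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Fixed classifier used for filtering (pretrained on ImageNet-1K).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

@torch.no_grad()
def predict(path):
    """Return (predicted ImageNet-1K class index, softmax confidence)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    probs = torch.softmax(model(x), dim=1)[0]
    conf, pred = probs.max(dim=0)
    return pred.item(), conf.item()

def imagenet_a_candidate(path, true_class):
    # IMAGENET-A-style filtering: keep only images the classifier gets wrong.
    pred, _ = predict(path)
    return pred != true_class

def imagenet_o_candidate(path, threshold=0.85):
    # IMAGENET-O-style filtering: keep out-of-distribution images that the
    # classifier assigns to some ImageNet-1K class with high confidence.
    # The 0.85 threshold is an illustrative value, not the paper's.
    _, conf = predict(path)
    return conf >= threshold
```

In both cases the automatic filter only produces candidates; the manual curation step described above (selecting visually clear images) still follows.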