Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising

Published in IAES International Journal of Artificial Intelligence (IJ-AI), 2022

Recommended citation: Sandhya Aneja, Nagender Aneja, Pg Abas, and Abdul Naim, "Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising," IAES International Journal of Artificial Intelligence (IJ-AI), vol. 11, no. 3, pp. 961-968, 2022. doi: 10.11591/ijai.v11.i3.pp961-968. https://ijai.iaescore.com/index.php/IJAI/article/view/21503


(Journal Publication)

[Access paper here](https://ijai.iaescore.com/index.php/IJAI/article/view/21503)

Abstract: Despite substantial advances in network architecture performance, susceptibility to adversarial attacks makes deep learning challenging to deploy in safety-critical applications. This paper proposes a data-centric approach to this problem. A nonlocal denoising method with different luminance values was used to generate adversarial examples from the MNIST and CIFAR-10 data sets. Under perturbation, the method provided absolute accuracy improvements of up to 9.3% on the MNIST data set and 13% on the CIFAR-10 data set. Training on transformed images with higher luminance values increases the robustness of the classifier. We also show that transfer learning is disadvantageous for adversarial machine learning. The results indicate that simple adversarial examples can improve resilience and make deep learning easier to apply in various applications.
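To make the data-centric idea concrete, below is a minimal sketch of nonlocal-means denoising applied at several filter strengths, using OpenCV's `fastNlMeansDenoisingColored` (where the `h` parameter controls the filter strength on the luminance component). This is an illustration under assumed settings, not the paper's exact pipeline: the `h` values, window sizes, and the random stand-in image are all hypothetical.

```python
# Minimal sketch: nonlocal-means denoising at several luminance filter
# strengths, applied to a (possibly adversarially perturbed) image.
# Assumes OpenCV (cv2) and NumPy; the h values are illustrative only.
import cv2
import numpy as np

def denoise_variants(image_bgr, h_values=(5, 10, 15)):
    """Return nonlocal-means-denoised copies of an 8-bit BGR image,
    one per luminance filter strength h (hypothetical values)."""
    return [
        cv2.fastNlMeansDenoisingColored(
            image_bgr, None,
            h=h,       # filter strength on the luminance component
            hColor=h,  # filter strength on the color components
            templateWindowSize=7,
            searchWindowSize=21,
        )
        for h in h_values
    ]

# Example: a random 32x32 image as a stand-in for a CIFAR-10 sample.
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
variants = denoise_variants(img)
```

In the spirit of the abstract, such transformed images (at higher luminance filter strengths) would then be added to the training set to increase classifier robustness under perturbation.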
