Open Access Journal

ISSN : 2394-2320 (Online)

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE)

Monthly Journal for Computer Science and Engineering


Performance analysis of misclassification by different neural networks for minimized perturbation

Authors: Ritesham¹, Hemant Saraswat², Yash Vardhan Varshney³, Kapil Dev Sharma⁴

Date of Publication: 17th October 2019

Abstract: Neural networks have achieved rapid success across a wide range of learning problems. These layered models reach classification decisions with high prediction confidence. In this work, we compare classification performance under intentionally designed adversarial perturbations. Differential evolution is used to generate multi-pixel down to single-pixel adversarial attacks on the CIFAR-10 colour image dataset. We show that even robust image classifiers can be fooled easily by such perturbations: the result is an adversarial perturbation that changes the output of a DNN. We further analyze the misclassification behaviour that can break a classifier, along with other implications of this vulnerability.
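The attack the abstract describes — differential evolution searching for a single-pixel perturbation that lowers a classifier's confidence in the true class — can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names (`one_pixel_attack`, `true_class_confidence`) are hypothetical, and the toy fitness function stands in for querying a trained DNN on CIFAR-10.

```python
import random

# Toy stand-in for a trained classifier's confidence in the true class.
# A real attack would query a DNN here; this proxy just tracks mean
# pixel intensity so the sketch is self-contained.
def true_class_confidence(image):
    flat = [v for row in image for px in row for v in px]
    return sum(flat) / (255.0 * len(flat))

def apply_pixel(image, cand):
    """Return a copy of `image` with one pixel set from candidate (x, y, r, g, b)."""
    x, y, r, g, b = cand
    out = [[list(px) for px in row] for row in image]
    out[int(y)][int(x)] = [int(r), int(g), int(b)]
    return out

def one_pixel_attack(image, fitness, pop_size=20, iters=30, F=0.5, CR=0.7, seed=0):
    """DE/rand/1/bin over candidates (x, y, r, g, b), minimizing `fitness`."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    lo = [0, 0, 0, 0, 0]
    hi = [w - 1, h - 1, 255, 255, 255]
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(5)] for _ in range(pop_size)]
    cost = [fitness(apply_pixel(image, c)) for c in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: base vector plus scaled difference, clipped to bounds.
            mut = [min(hi[d], max(lo[d], pop[a][d] + F * (pop[b][d] - pop[c][d])))
                   for d in range(5)]
            # Binomial crossover with a guaranteed mutant dimension.
            jrand = rng.randrange(5)
            trial = [mut[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(5)]
            tc = fitness(apply_pixel(image, trial))
            if tc < cost[i]:  # lower true-class confidence = stronger attack
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

Because differential evolution needs only fitness evaluations, the attack is black-box: it never requires gradients from the target network, only its output confidences.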

References:

    1. A. Krizhevsky, "CIFAR-10 database." [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html.
    2. N. Yu and K. Darling, "A Low-Cost Approach to Crack Python CAPTCHAs Using AI-Based Chosen-Plaintext Attack," 2019.
    3. Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: Closing the Gap to Human-Level Performance in Face Verification," Computer Vision and Pattern Recognition (CVPR), pp. 1–8, 2014.
    4. M. Barreno, A. D. Joseph, and J. D. Tygar, "Can Machine Learning Be Secure?," ACM Symposium on Information, Computer and Communications Security, 2006.
    5. M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar, "The security of machine learning," Machine Learning, pp. 121–148, 2010.
    6. S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: a simple and accurate method to fool deep neural networks," Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582, 2016.
    7. C. Szegedy, I. Goodfellow, J. Bruna, and R. Fergus, "Intriguing properties of neural networks," ICLR, pp. 1–9, 2014.
    8. S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," arXiv preprint, 2016.
    9. I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and Harnessing Adversarial Examples," ICLR, pp. 1–11, 2015.
    10. P. Civicioglu and E. Besdok, "A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms," Artificial Intelligence Review, pp. 315–346, 2013.
    11. P. Sabarinath, N. B. K. Babu, M. R. Thansekhar, and R. Saravanan, "Performance Evaluation of Differential Evolution and Particle Swarm Optimization Algorithms for the Optimal Design of Closed Coil Helical Spring," International Journal of Innovative Research in Science, Engineering and Technology, vol. 3, no. 3, pp. 1423–1428, 2014.
    12. H. A. Abbass, "The Self-Adaptive Pareto Differential Evolution Algorithm," Congress on Evolutionary Computation, 2002.
    13. R. K. O. Bayot, "A Survey on Object Classification using Convolutional Neural Networks."
