Open Access Journal

ISSN : 2394-2320 (Online)

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE)

Monthly Journal for Computer Science and Engineering


Security Based Pattern Classifiers

Authors: Mr. Zaid Alam Khan 1, Mr. MD Azher 2, Mr. Kante Surya 3, Chandra Rao 4, Ms. Neelu 5

Date of Publication: 7th July 2015

Abstract: Security is commonly understood as protecting oneself from harmful attacks, and it is part of everyone's life: people want to be safe and secure at all times, yet one never knows when a system may be attacked by malicious intruders, so keeping security at the highest practical level is a necessary task. Pattern classification systems are commonly used in adversarial applications such as biometric authentication, network intrusion detection, and spam filtering. In all three of these areas, data can be purposely manipulated or modified by humans to undermine the system's operation. Such scenarios are not considered by classical design methods, so pattern classification systems may exhibit vulnerabilities that, when exploited, can severely degrade performance. Extending pattern classification theory and design methods to real-time applications is therefore a highly relevant research direction that has not yet been pursued in a systematic way. This paper introduces one of the main open issues: establishing a security system as a real-time application that can be used in organisations such as hospitals, banking systems, and libraries. Reports show that security evaluation can provide a more complete understanding of a classifier's behaviour and lead to better design choices.
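The abstract's point that adversaries can purposely manipulate data to undermine a classifier can be illustrated with a minimal sketch. The toy linear spam scorer below, with entirely made-up words, weights, and threshold (none of which come from the paper), shows a "good word" evasion attack of the kind studied in the spam-filtering literature: appending ham-indicative words drives the score below the decision threshold without changing the malicious payload.

```python
# Hedged sketch: a toy linear spam scorer and a "good word" evasion attack,
# illustrating why security evaluation of pattern classifiers matters.
# All words, weights, and the threshold are illustrative assumptions.

SPAM_WEIGHTS = {
    "free": 2.0, "winner": 2.5, "credit": 1.5,        # spam-indicative words
    "meeting": -1.5, "report": -1.0, "thanks": -1.0,  # ham-indicative words
}
THRESHOLD = 1.0  # score above this => classified as spam

def score(message: str) -> float:
    """Sum the weights of the known words appearing in the message."""
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in message.lower().split())

def is_spam(message: str) -> bool:
    return score(message) > THRESHOLD

spam = "free credit winner"
print(is_spam(spam))  # True: score 6.0 exceeds the threshold

# Good-word attack: the adversary appends ham-indicative words to push the
# score under the threshold while the spam payload stays unchanged.
evaded = spam + " meeting meeting report thanks thanks"
print(is_spam(evaded))  # False: the same payload now evades the filter
```

A security evaluation in the spirit the abstract describes would probe the classifier with such manipulated inputs at design time, rather than measuring accuracy only on unmodified data.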

References:

    1. R.N. Rodrigues, L.L. Ling, and V. Govindaraju, “Robustness of Multimodal Biometric Fusion Methods against Spoof Attacks,” J. Visual Languages and Computing, vol. 20, no. 3, pp. 169-179, 2009.
    2. P. Johnson, B. Tan, and S. Schuckers, “Multimodal Fusion Vulnerability to Non-Zero Effort (Spoof) Imposters,” Proc. IEEE Int’l Workshop Information Forensics and Security, pp. 1-5, 2010.
    3. P. Fogla, M. Sharif, R. Perdisci, O. Kolesnikov, and W. Lee, “Polymorphic Blending Attacks,” Proc. 15th Conf. USENIX Security Symp., 2006.
    4. G.L. Wittel and S.F. Wu, “On Attacking Statistical Spam Filters,” Proc. First Conf. Email and Anti-Spam, 2004.
    5. D. Lowd and C. Meek, “Good Word Attacks on Statistical Spam Filters,” Proc. Second Conf. Email and Anti-Spam, 2005.
    6. A. Kolcz and C.H. Teo, “Feature Weighting for Improved Classifier Robustness,” Proc. Sixth Conf. Email and Anti-Spam, 2009.
    7. D.B. Skillicorn, “Adversarial Knowledge Discovery,” IEEE Intelligent Systems, vol. 24, no. 6, Nov./Dec. 2009.
    8. D. Fetterly, “Adversarial Information Retrieval: The Manipulation of Web Content,” ACM Computing Rev., 2007.
    9. R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification. Wiley-Interscience Publication, 2000.
    10. N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, “Adversarial Classification,” Proc. 10th ACM SIGKDD Int’l Conf. Knowledge Discovery and Data Mining, pp. 99-108, 2004.
    11. M. Barreno, B. Nelson, R. Sears, A.D. Joseph, and J.D. Tygar, “Can Machine Learning be Secure?” Proc. ACM Symp. Information, Computer and Comm. Security (ASIACCS), pp. 16-25, 2006.
    12. A.A. Cardenas and J.S. Baras, “Evaluation of Classifiers: Practical Considerations for Security Applications,” Proc. AAAI Workshop Evaluation Methods for Machine Learning, 2006.
    13. P. Laskov and R. Lippmann, “Machine Learning in Adversarial Environments,” Machine Learning, vol. 81, pp. 115-119, 2010.
    14. L. Huang, A.D. Joseph, B. Nelson, B. Rubinstein, and J.D. Tygar, “Adversarial Machine Learning,” Proc. Fourth ACM Workshop Artificial Intelligence and Security, pp. 43-57, 2011.
    15. M. Barreno, B. Nelson, A. Joseph, and J. Tygar, “The Security of Machine Learning,” Machine Learning, vol. 81, pp. 121-148, 2010.
    16. D. Lowd and C. Meek, “Adversarial Learning,” Proc. 11th ACM SIGKDD Int’l Conf. Knowledge Discovery and Data Mining, pp. 641-647, 2005.
    17. P. Laskov and M. Kloft, “A Framework for Quantitative Security Analysis of Machine Learning,” Proc. Second ACM Workshop Security and Artificial Intelligence, pp. 1-4, 2009.
    18. NIPS Workshop Machine Learning in Adversarial Environments for Computer Security, http://mlsnips07.first.fraunhofer.de/, 2007.
    19. Dagstuhl Perspectives Workshop Machine Learning Methods for Computer Security, http://www.dagstuhl.de/12371/, 2012.
