A Framework for Spam Filtering Security Evaluation
Abstract
Pattern classification is a branch of machine learning focused on recognizing patterns in data. Pattern classification systems are deployed in adversarial applications such as spam filtering, network intrusion detection systems (NIDS), and biometric authentication, where an intelligent adversary can manipulate data to undermine their operation. Spam filtering is one such adversarial application, and the security of the many machine learning systems used for it therefore warrants careful evaluation. We present a framework for the experimental evaluation of classifier security in adversarial environments that combines and builds on the notions of the arms race and security by design, adversary modelling, and data distributions under attack. Furthermore, we present a MILR classifier, which combines SVM and logistic regression (LR) classifiers, to distinguish legitimate emails from spam on the basis of their textual content.
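As a minimal sketch of the classification setting the abstract describes, the snippet below trains an SVM and a logistic regression (LR) classifier on TF-IDF features of email text using scikit-learn. The tiny toy corpus and the choice of `TfidfVectorizer` are illustrative assumptions, not the paper's actual dataset or feature pipeline.

```python
# Illustrative sketch (assumed setup): SVM and LR spam/ham classifiers
# trained on TF-IDF features of email text, in the spirit of the
# SVM + LR combination the abstract describes. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

emails = [
    "win a free prize now, click here",
    "limited offer, claim your free money",
    "meeting rescheduled to monday morning",
    "please review the attached project report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

# Turn raw email text into TF-IDF feature vectors.
vec = TfidfVectorizer()
X = vec.fit_transform(emails)

# Train the two base classifiers on the same features.
svm = LinearSVC().fit(X, labels)
lr = LogisticRegression().fit(X, labels)

# Classify an unseen message with each model.
test = vec.transform(["claim your free prize"])
svm_pred = svm.predict(test)[0]
lr_pred = lr.predict(test)[0]
print("SVM:", svm_pred, "LR:", lr_pred)
```

In an adversarial setting, the security evaluation would then measure how these classifiers degrade when the test distribution is manipulated, e.g. by inserting "good words" into spam messages.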