Vulnerability Analysis Of ML-Based Intrusion Detection Systems Against Evasion Attacks


Sushil Buriya
Neelam Sharma

Abstract





Intrusion Detection Systems (IDS) are an essential component of cyber-security countermeasures that protect networks against cyber-attacks. With the introduction of machine learning (ML) techniques into IDS detection engines, IDS have become more sophisticated and capable of identifying complex attack patterns. However, adversarial evasion attacks pose a significant threat to ML models. These attacks subtly modify malicious inputs so that they evade detection while retaining their malicious intent. This paper presents a comprehensive comparative analysis of the impact of various adversarial evasion attack techniques on different machine learning models used in IDS implementations. We evaluate the robustness of three commonly used models, Logistic Regression, Gradient Boosting Classifier, and Multi-layer Perceptron, against the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks. We demonstrate the vulnerabilities of each model and discuss the implications of these findings for the design and deployment of robust IDS. The results highlight the necessity of adversarial defense methods to mitigate the risks posed by evasion attacks and to ensure the reliability and security of ML-based IDS in real-world applications.
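The attacks evaluated in the paper can be illustrated with a minimal sketch. The snippet below is not the authors' experimental code; it assumes a synthetic stand-in dataset (`make_classification`) in place of real IDS traffic features, and it implements FGSM and PGD against a Logistic Regression classifier, for which the input gradient of the log-loss has the closed form (sigmoid(w·x + b) − y)·w.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for IDS traffic features (the paper's actual
# dataset is not reproduced here).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def input_gradient(clf, X, y):
    """Gradient of the logistic log-loss w.r.t. the input features.

    For logistic regression the gradient is (p - y) * w, where
    p = sigmoid(w.x + b) is the predicted probability of class 1.
    """
    w = clf.coef_.ravel()
    p = clf.predict_proba(X)[:, 1]
    return (p - y)[:, None] * w[None, :]

def fgsm(clf, X, y, eps=0.5):
    # FGSM: single step of size eps in the sign of the loss gradient.
    return X + eps * np.sign(input_gradient(clf, X, y))

def pgd(clf, X, y, eps=0.5, alpha=0.1, steps=10):
    # PGD: iterated FGSM steps, projected back onto the L-inf
    # eps-ball around the original inputs after every step.
    X_adv = X.copy()
    for _ in range(steps):
        X_adv = X_adv + alpha * np.sign(input_gradient(clf, X_adv, y))
        X_adv = np.clip(X_adv, X - eps, X + eps)
    return X_adv

X_fgsm = fgsm(clf, X, y)
X_pgd = pgd(clf, X, y)
print(f"clean accuracy: {clf.score(X, y):.3f}")
print(f"FGSM  accuracy: {clf.score(X_fgsm, y):.3f}")
print(f"PGD   accuracy: {clf.score(X_pgd, y):.3f}")
```

Because both attacks follow the loss gradient, accuracy on the perturbed inputs should drop well below clean accuracy; the same perturbations can also be transferred to gradient-free models such as a Gradient Boosting Classifier to probe their robustness.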








Article Details

How to Cite
Sushil Buriya, & Neelam Sharma. (2023). Vulnerability Analysis Of ML-Based Intrusion Detection Systems Against Evasion Attacks. Educational Administration: Theory and Practice, 29(4), 1960–1968. https://doi.org/10.53555/kuey.v29i4.6791
Section
Articles
Author Biographies

Sushil Buriya

Research Scholar, Department of Computer Science, Banasthali Vidyapith

Neelam Sharma

Associate Professor, Department of Computer Science, Banasthali Vidyapith