Vulnerability Analysis Of ML-Based Intrusion Detection Systems Against Evasion Attacks
Abstract
Intrusion Detection Systems (IDS) are an essential component of cyber security countermeasures, protecting networks against cyber-attacks. With the integration of machine learning (ML) techniques into their detection engines, IDS have become more sophisticated and capable of identifying complex attack patterns. However, adversarial evasion attacks pose a significant threat to machine learning models. These attacks subtly modify malicious inputs to evade detection while preserving their malicious intent. This paper presents a comprehensive comparative analysis of the impact of various adversarial evasion attack techniques on different machine learning models used in IDS implementations. We evaluate the robustness of three commonly used models, Logistic Regression, Gradient Boosting Classifier, and Multi-layer Perceptron, against Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks. We demonstrate the vulnerabilities of each model and discuss the implications of these findings for the design and deployment of robust IDS. The results highlight the necessity of adversarial defense methods to mitigate the risks posed by adversarial evasion attacks and to ensure the reliability and security of ML-based IDS in real-world applications.
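For concreteness, the sketch below illustrates the two attack formulations referenced in the abstract against a logistic-regression detector: FGSM takes a single signed-gradient step, x_adv = x + eps * sign(grad_x L), while PGD iterates that step and projects the result back into an eps-ball around the original input. The weights and the sample here are random stand-ins for illustration only, not the trained IDS models or the dataset evaluated in this study.

```python
# Minimal sketch of FGSM and PGD against a logistic-regression classifier.
# Model parameters and the input sample are hypothetical stand-ins.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(w, b, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x.

    For p = sigmoid(w . x + b), dL/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(w, b, x, y, eps):
    """Single-step attack: x_adv = x + eps * sign(grad_x L)."""
    return x + eps * np.sign(input_gradient(w, b, x, y))

def pgd(w, b, x, y, eps, alpha=0.01, steps=40):
    """Iterative signed-gradient steps, projected back into the
    L-infinity eps-ball around the original input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_gradient(w, b, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=10), 0.0   # stand-in model parameters
    x, y = rng.normal(size=10), 1.0   # a "malicious" sample, label 1

    # Ascending the loss for label 1 drives the malicious-class score
    # down, i.e. toward evasion of the detector.
    print("clean score:", sigmoid(w @ x + b))
    print("FGSM score: ", sigmoid(w @ fgsm(w, b, x, y, eps=0.3) + b))
    print("PGD score:  ", sigmoid(w @ pgd(w, b, x, y, eps=0.3) + b))
```

Bounding each perturbation by eps in the L-infinity norm is what keeps the modification "subtle" in the sense described above: every feature moves by at most eps, yet the detector's score on the malicious sample drops.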