Secure and Interpretable Intrusion Detection through Federated and Ensemble Machine Learning with XAI

Authors

  • Sikander Javed Faculty of Computer Science & Information Technology, Superior University, Lahore, Pakistan.
  • Naveed Mukhtar Faculty of Computer Science & Information Technology, Superior University, Lahore, Pakistan.
  • Shahid Iqbal Department of Computer Science, Green International University, Lahore, Pakistan.
  • Syed Asad Ali Naqvi Faculty of Computer Science & Information Technology, Superior University, Lahore, Pakistan.
  • Amna Ishtiaq Department of Computer Science, Green International University, Lahore, Pakistan.
  • Shahan Yamin Siddiqui Department of Computer Science, NASTP Institute of Information Technology, Lahore, Pakistan.
  • Muhammad Ammar Department of Computer Science, Green International University, Lahore, Pakistan.

Keywords:

IDS, Machine Learning, Federated Learning, Ensemble Learning, Shapley Additive Explanations (SHAP), General Data Protection Regulation (GDPR), Intrusions

Abstract

In today’s digital era, with the expansion of internet-connected systems, the security of network systems is becoming increasingly critical as the risk of sophisticated cyber-attacks grows. An Intrusion Detection System (IDS) is required that can identify unauthorized and harmful attacks while protecting the network environment. However, machine learning based (ML) IDSs raise concerns regarding data privacy, generalizability, scalability, and transparency. To address these challenges, this study proposes a novel framework combining ML and explainable artificial intelligence (XAI). Federated learning (FL) is a machine learning technique that enhances security and data privacy in networked systems; in this study, FL is integrated with ensemble learning for IDS. FL ensures data privacy by training models locally at distributed nodes without sharing raw data, thereby meeting regulatory requirements. A powerful ensemble algorithm is incorporated to enhance accuracy in detecting attacks of diverse patterns and types. Moreover, Shapley Additive Explanations (SHAP), an XAI technique for explaining predictions, is incorporated in this study to interpret the model’s outputs. SHAP quantifies the contribution of each individual feature, thereby enabling better human understanding and ensuring trust in AI-based models. The FL-based ensemble learning model is evaluated on the NID dataset, a widely accepted benchmark for intrusion detection, thereby providing validation. Superior performance is achieved in terms of accuracy, precision, recall, F1-score, and AUROC. By combining FL, ensemble ML, and XAI, a powerful solution is developed that provides both security and privacy preservation.
Thus, the proposed framework contributes significantly to the advancement of AI in cybersecurity and in environments where data sensitivity is crucial.
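The pipeline the abstract describes — local training at distributed nodes, server-side aggregation of model parameters without raw-data exchange, and SHAP-style feature attributions — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simple logistic-regression client model, FedAvg-style weight averaging, and the exact closed-form SHAP values for a linear model (phi_i = w_i * (x_i - E[x_i])); all function names and hyperparameters are illustrative.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One node's local logistic-regression training; raw data stays local."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)       # gradient step on log loss
    return w

def fed_avg(client_data, dim, rounds=10):
    """Server broadcasts the global model and averages returned weights."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_sgd(w.copy(), X, y) for X, y in client_data]
        w = np.mean(local_ws, axis=0)          # FedAvg aggregation step
    return w

def linear_shap(w, x, background_mean):
    """Exact SHAP values for a linear model: phi_i = w_i * (x_i - E[x_i])."""
    return w * (x - background_mean)

rng = np.random.default_rng(0)

def make_node(n=200):
    """Simulated node: feature 0 (and weakly feature 1) drive the label."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    return X, y

clients = [make_node(), make_node()]           # two distributed nodes
w = fed_avg(clients, dim=3)
phi = linear_shap(w, x=np.array([2.0, 0.0, 0.0]),
                  background_mean=np.zeros(3))
# Feature 0 dominates this sample's explanation, as constructed.
print(phi)
```

In a realistic deployment the linear client model would be replaced by the paper's ensemble learners and the closed-form attribution by a general SHAP explainer, but the privacy property is the same: only model parameters, never raw traffic records, cross the node boundary.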

Published

2025-05-27

How to Cite

Sikander Javed, Naveed Mukhtar, Shahid Iqbal, Syed Asad Ali Naqvi, Amna Ishtiaq, Shahan Yamin Siddiqui, & Muhammad Ammar. (2025). Secure and Interpretable Intrusion Detection through Federated and Ensemble Machine Learning with XAI. Journal of Computing & Biomedical Informatics. Retrieved from https://jcbi.org/index.php/Main/article/view/975

Issue

Section

Articles