Machine learning (ML) has recently been used extensively in intrusion detection systems (IDS) and has proved highly effective in environments such as the Cloud and the IoT. However, because of their complexity, the decisions made by such ML-based IDS are hard to analyze, understand, and interpret: as these systems become more effective, they also become less transparent. In this paper, we analyze and explain ML-based IDS using the SHapley Additive exPlanations (SHAP) explainability technique. We apply SHAP to several ML models, namely Decision Trees (DT), Random Forest (RF), Logistic Regression (LR), and Feed-Forward Neural Networks (FFNN), and we conduct our analysis on NetFlow data collected from Cloud and IoT environments.
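To illustrate the general workflow described above, the following is a minimal sketch, not the paper's exact pipeline: it trains a Random Forest on synthetic NetFlow-style features and explains its predictions with the SHAP library's TreeExplainer. The feature names, data, and labels are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): explain a
# Random Forest IDS with SHAP on synthetic NetFlow-style features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical NetFlow-style features (placeholders for illustration).
feature_names = ["in_bytes", "in_pkts", "flow_duration_ms", "dst_port"]
X = rng.random((500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)  # toy "attack" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Older shap versions return a list per class; newer ones return an
# array of shape (n_samples, n_features, n_classes). Handle both and
# keep the values for the positive ("attack") class.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(sv).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```

The same pattern extends to the other models the paper considers; for non-tree models such as LR or FFNN, a model-agnostic explainer (e.g., shap.KernelExplainer) would be used instead of TreeExplainer.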