Paper Title
Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods
Paper Authors
Paper Abstract
Machine learning (ML) on graph-structured data has recently attracted increased interest in the context of intrusion detection in the cybersecurity domain. Due to the growing amounts of data generated by monitoring tools, as well as increasingly sophisticated attacks, these ML methods are gaining traction. Knowledge graphs and their corresponding learning techniques, such as Graph Neural Networks (GNNs), with their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies, are finding application in the cybersecurity domain. However, similar to other connectionist models, GNNs lack transparency in their decision making. This is especially important because the cybersecurity domain tends to produce a high number of false positive alerts, which must be triaged by domain experts at considerable cost in manpower. Therefore, we address Explainable AI (XAI) for GNNs to enhance trust management by combining symbolic and sub-symbolic methods that incorporate domain knowledge in the cybersecurity setting. We experimented with this approach by generating explanations in an industrial demonstrator system. The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios. Not only do the explanations provide deeper insight into the alerts, they also reduce false positive alerts by 66%, and by 93% when the fidelity metric is included.
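The abstract does not specify the implementation, but the fidelity-based filtering idea can be illustrated with a small, hypothetical sketch: score a candidate explanation subgraph for an alert node by how much the alert probability drops when the explanation edges are removed, and discard alerts whose explanations have low fidelity as likely false positives. Everything below (the toy GCN, the function names `gcn_layer`, `predict`, `fidelity`, the `expl_edges` argument, and the 0.1 threshold) is an illustrative assumption, not the authors' method.

```python
# Hypothetical sketch (not the paper's implementation): a tiny GCN alert
# classifier plus a fidelity-style filter over candidate explanation subgraphs.
import torch
import torch.nn.functional as F


def gcn_layer(x, adj, weight):
    """One graph-convolution step: degree-normalized neighborhood aggregation."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return F.relu((adj @ x / deg) @ weight)


def predict(x, adj, w1, w2):
    """Two-layer GCN producing a per-node alert probability."""
    h = gcn_layer(x, adj, w1)
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    logits = (adj @ h / deg) @ w2
    return torch.sigmoid(logits).squeeze(-1)


def fidelity(x, adj, w1, w2, node, expl_edges):
    """Fidelity-style score: remove the explanation edges and measure how much
    the alert probability for `node` drops. A faithful explanation should carry
    most of the evidence, so the drop should be large."""
    full = predict(x, adj, w1, w2)[node]
    adj_masked = adj.clone()
    for u, v in expl_edges:  # remove the candidate explanation subgraph
        adj_masked[u, v] = adj_masked[v, u] = 0.0
    masked = predict(x, adj_masked, w1, w2)[node]
    return (full - masked).item()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy monitoring graph: 6 nodes (e.g. hosts/processes), undirected edges.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
    adj = torch.zeros(6, 6)
    for u, v in edges:
        adj[u, v] = adj[v, u] = 1.0
    x = torch.randn(6, 8)                             # node features from monitoring data
    w1, w2 = torch.randn(8, 16), torch.randn(16, 1)   # untrained toy weights

    alert_node = 2                    # node flagged by the detector
    explanation = [(1, 2), (2, 3)]    # candidate explanation subgraph for the alert
    score = fidelity(x, adj, w1, w2, alert_node, explanation)
    # Illustrative filter rule: low fidelity means the explanation does not
    # actually support the alert, so treat it as a likely false positive.
    print("fidelity:", round(score, 3), "keep alert:", score > 0.1)
```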