Paper Title
CARE: Certifiably Robust Learning with Reasoning via Variational Inference
Paper Authors
Paper Abstract
Despite great recent advances achieved by deep neural networks (DNNs), they are often vulnerable to adversarial attacks. Intensive research efforts have been made to improve the robustness of DNNs; however, most empirical defenses can be adaptively attacked again, and the theoretically certified robustness is limited, especially on large-scale datasets. One potential root cause of such vulnerabilities in DNNs is that although they have demonstrated powerful expressiveness, they lack the reasoning ability to make robust and reliable predictions. In this paper, we aim to integrate domain knowledge to enable robust learning within a reasoning paradigm. In particular, we propose a certifiably robust learning with reasoning pipeline (CARE), which consists of a learning component and a reasoning component. Concretely, we use a set of standard DNNs to serve as the learning component that makes semantic predictions, and we leverage probabilistic graphical models, such as Markov logic networks (MLNs), to serve as the reasoning component that enables knowledge/logic reasoning. However, it is known that exact inference in an MLN (reasoning) is #P-complete, which limits the scalability of the pipeline. To this end, we propose to approximate MLN inference via variational inference based on an efficient expectation maximization algorithm. In particular, we leverage graph convolutional networks (GCNs) to encode the posterior distribution during variational inference, and we iteratively update the parameters of the GCN (E-step) and the weights of the knowledge rules in the MLN (M-step). We conduct extensive experiments on different datasets and show that CARE achieves significantly higher certified robustness compared with state-of-the-art baselines. We also conduct ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of integrating different types of knowledge.
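To make the alternating E-step/M-step procedure described in the abstract more concrete, below is a minimal, illustrative sketch in PyTorch-style Python. All names (`sensing_dnns`, `posterior_gcn`, `rule_weights`, `groundings`, `knowledge_graph`, `elbo`) are hypothetical placeholders rather than the paper's actual API, and the linearized ELBO is a deliberate simplification of true MLN potentials; the sketch only shows how GCN parameters (E-step) and MLN rule weights (M-step) could be updated in alternation under these assumptions.

```python
import torch

def elbo(q_logits, rule_weights, groundings):
    """Simplified mean-field ELBO: expected value of linear rule potentials
    under q plus the entropy of q. Real MLN potentials couple several ground
    atoms; this linearized surrogate only keeps the sketch self-contained."""
    q = torch.softmax(q_logits, dim=-1)                       # (batch, num_labels)
    expected_potential = (q @ groundings.t() @ rule_weights).mean()
    entropy = -(q * torch.log(q + 1e-8)).sum(dim=-1).mean()
    return expected_potential + entropy

def variational_em(sensing_dnns, posterior_gcn, rule_weights, groundings,
                   knowledge_graph, data_loader, num_rounds=10):
    """Alternate E-steps (fit the GCN-encoded posterior) and M-steps (fit the
    MLN rule weights). All arguments are illustrative placeholders:
    `sensing_dnns` is a list of standard DNNs making semantic predictions,
    `posterior_gcn` encodes q(y | x), `rule_weights` is a learnable tensor of
    MLN rule weights, and `groundings` maps labels to rule groundings."""
    gcn_opt = torch.optim.Adam(posterior_gcn.parameters(), lr=1e-3)
    mln_opt = torch.optim.Adam([rule_weights], lr=1e-2)

    for _ in range(num_rounds):
        # E-step: update GCN parameters so q(y | x) tightens the variational bound.
        for x, _ in data_loader:
            semantic_logits = [dnn(x) for dnn in sensing_dnns]   # learning component
            q_logits = posterior_gcn(semantic_logits, knowledge_graph)
            loss = -elbo(q_logits, rule_weights, groundings)
            gcn_opt.zero_grad(); loss.backward(); gcn_opt.step()

        # M-step: update MLN rule weights under the fixed current posterior.
        for x, _ in data_loader:
            with torch.no_grad():
                semantic_logits = [dnn(x) for dnn in sensing_dnns]
            q_logits = posterior_gcn(semantic_logits, knowledge_graph).detach()
            loss = -elbo(q_logits, rule_weights, groundings)
            mln_opt.zero_grad(); loss.backward(); mln_opt.step()
```

The point of the sketch is that both steps optimize the same variational objective, so exact (#P-complete) MLN inference is never required; only the variational posterior and the rule weights are updated in turn.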