Title
Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings
Authors
Abstract
One major drawback of deep convolutional neural networks (CNNs) for use in safety-critical applications is their black-box nature. This makes it hard to verify or monitor complex, symbolic requirements on already-trained computer vision CNNs. In this work, we present a simple yet effective approach to verify that a CNN complies with symbolic predicate logic rules which relate visual concepts. It is the first approach that (1) does not modify the CNN, (2) may use visual concepts that are neither input nor output features of the CNN, and (3) can leverage continuous CNN confidence outputs. To achieve this, we newly combine methods from explainable artificial intelligence and logic: First, using supervised concept embedding analysis, the output of a CNN is post-hoc enriched with concept outputs. Second, rules from prior knowledge are modelled as truth functions that accept the CNN outputs and can be evaluated with little computational overhead. Here, we investigate the use of fuzzy logic, i.e., continuous truth values, and of proper output calibration, both of which show slight benefits theoretically and in practice. Applicability is demonstrated on state-of-the-art object detectors for three verification use cases, in which monitoring of rule breaches can reveal detection errors.
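As a minimal sketch of the idea described above: a symbolic rule relating visual concepts can be turned into a fuzzy truth function over continuous CNN confidence outputs. The connectives below use product t-norm semantics and a Reichenbach implication, and the example rule and concept names (`person`, `arm`, `leg`) are purely illustrative assumptions, not the paper's actual rule set or logic choice.

```python
# Fuzzy connectives over confidences in [0, 1] (product t-norm family
# chosen here for illustration; other fuzzy logics are possible).

def fuzzy_and(a: float, b: float) -> float:
    """Product t-norm: fuzzy conjunction."""
    return a * b

def fuzzy_or(a: float, b: float) -> float:
    """Probabilistic sum: fuzzy disjunction (dual t-conorm)."""
    return a + b - a * b

def fuzzy_implies(a: float, b: float) -> float:
    """Reichenbach implication: 1 - a + a*b."""
    return 1.0 - a + a * b

# Hypothetical rule: "a detected person should come with an arm or leg
# concept", i.e. person -> (arm OR leg). Inputs are calibrated concept
# confidences produced post-hoc by concept embedding analysis.
def rule_truth(person: float, arm: float, leg: float) -> float:
    return fuzzy_implies(person, fuzzy_or(arm, leg))

# Monitoring: a low truth value flags a potential detection error.
consistent = rule_truth(0.9, 0.8, 0.1)    # body parts support the person
suspicious = rule_truth(0.9, 0.05, 0.02)  # confident person, no parts
print(consistent, suspicious)
```

Because the truth function is a few arithmetic operations per detection, it can be evaluated at runtime with negligible overhead compared to the CNN forward pass.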