Paper Title

Testing Monotonicity of Machine Learning Models

Authors

Arnab Sharma, Heike Wehrheim

Abstract

Today, machine learning (ML) models are increasingly applied in decision making. This induces an urgent need for quality assurance of ML models with respect to (often domain-dependent) requirements. Monotonicity is one such requirement. It specifies a software as 'learned' by an ML algorithm to give an increasing prediction with the increase of some attribute values. While there exist multiple ML algorithms for ensuring monotonicity of the generated model, approaches for checking monotonicity, in particular of black-box models, are largely lacking. In this work, we propose verification-based testing of monotonicity, i.e., the formal computation of test inputs on a white-box model via verification technology, and the automatic inference of this approximating white-box model from the black-box model under test. On the white-box model, the space of test inputs can be systematically explored by a directed computation of test cases. The empirical evaluation on 90 black-box models shows verification-based testing can outperform adaptive random testing as well as property-based techniques with respect to effectiveness and efficiency.
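To make the monotonicity property concrete, below is a minimal sketch of checking it on a black-box model, assuming only a hypothetical `predict` function is available. This illustrates the property being tested, not the paper's actual method: the authors' approach additionally infers an approximating white-box model and uses verification technology to compute directed test inputs, rather than sampling as done here.

```python
# Minimal black-box monotonicity check (illustrative sketch only).
# `predict`, `attr_index`, and `delta` are assumed names, not from the paper.
import random

def find_violation(predict, base_inputs, attr_index, delta=1.0):
    """Return an input pair violating monotonicity in attribute
    `attr_index` (prediction drops when the attribute increases),
    or None if no violation is found among the sampled inputs."""
    for x in base_inputs:
        x_up = list(x)
        x_up[attr_index] += delta  # increase one attribute, hold the rest fixed
        if predict(x_up) < predict(x):
            return x, x_up  # counterexample: higher attribute, lower prediction
    return None

# Toy model: monotone in attribute 0, non-monotone in attribute 1.
def toy_predict(x):
    return 2.0 * x[0] - (x[1] - 1.0) ** 2

random.seed(0)
inputs = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(100)]
print(find_violation(toy_predict, inputs, 0))              # None: monotone in attr 0
print(find_violation(toy_predict, inputs, 1) is not None)  # True: violation found
```

Random sampling like this can easily miss violations in small regions of the input space, which is the motivation the abstract gives for computing test inputs formally on a white-box approximation instead.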
