Title
Explainable Models via Compression of Tree Ensembles
Authors
Abstract
Ensemble models (bagging and gradient-boosting) of relational decision trees have proved to be one of the most effective learning methods in the area of probabilistic logic models (PLMs). While effective, they lose one of the most important aspects of PLMs -- interpretability. In this paper we consider the problem of compressing a large set of learned trees into a single explainable model. To this end, we propose CoTE -- Compression of Tree Ensembles -- which produces a single small decision list as a compressed representation. CoTE first converts the trees to decision lists and then performs the combination and compression with the aid of the original training set. An experimental evaluation demonstrates the effectiveness of CoTE on several benchmark relational data sets.
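The pipeline the abstract describes (flatten each tree into rules, concatenate the rules into one decision list, then compress using the training set) can be illustrated with a toy sketch. This is not the authors' CoTE implementation: the tree encoding, the first-match decision-list semantics, and the greedy "drop a rule if training predictions are unchanged" compression step are all simplifying assumptions made here for illustration.

```python
# Toy illustration (not the authors' code) of compressing a tree
# ensemble into a decision list. A tree is either a leaf label or a
# tuple (feature, threshold, left_subtree, right_subtree).

def tree_to_rules(tree, path=()):
    """Yield (conditions, label) for every root-to-leaf path."""
    if not isinstance(tree, tuple):
        yield (path, tree)
        return
    feat, thr, left, right = tree
    yield from tree_to_rules(left, path + ((feat, "<=", thr),))
    yield from tree_to_rules(right, path + ((feat, ">", thr),))

def fires(conds, x):
    """True if example x satisfies every condition of the rule."""
    return all(x[f] <= t if op == "<=" else x[f] > t
               for f, op, t in conds)

def predict(dlist, x, default=0):
    """First-match semantics: the first firing rule decides."""
    for conds, label in dlist:
        if fires(conds, x):
            return label
    return default

def compress(dlist, X):
    """Greedily drop rules whose removal leaves every training
    prediction unchanged (a stand-in for the real compression)."""
    kept = list(dlist)
    for rule in list(kept):
        trial = [r for r in kept if r is not rule]
        if all(predict(trial, x) == predict(kept, x) for x in X):
            kept = trial
    return kept

# Two identical stumps stand in for an ensemble with redundancy.
ensemble = [(0, 5, "neg", "pos"), (0, 5, "neg", "pos")]
rules = [r for t in ensemble for r in tree_to_rules(t)]
X_train = [{0: 3}, {0: 7}]
compact = compress(rules, X_train)
```

On this toy input the four concatenated rules collapse to two, while training-set predictions are preserved. Note that a real combination step must also reconcile the votes of multiple trees rather than just concatenating their rules; that part is elided here.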