Paper Title

Developing and Improving Risk Models using Machine-learning Based Algorithms

Authors

Wang, Yan; Ni, Xuelei Sherry

Abstract

The objective of this study is to develop a good risk model for classifying business delinquency by simultaneously exploring several machine-learning-based methods, including regularization, hyper-parameter optimization, and model ensembling algorithms. The rationale behind the analyses is first to obtain good base binary classifiers (including Logistic Regression ($LR$), K-Nearest Neighbors ($KNN$), Decision Tree ($DT$), and Artificial Neural Networks ($ANN$)) via regularization and appropriate hyper-parameter settings. Two model ensembling algorithms, bagging and boosting, are then applied to the good base classifiers for further model improvement. The models are evaluated using accuracy, Area Under the Receiver Operating Characteristic Curve (AUC of ROC), recall, and F1 score, via 10-fold cross-validation repeated 10 times. The results show that the optimal base classifiers, along with their hyper-parameter settings, are $LR$ without regularization, $KNN$ with 9 nearest neighbors, $DT$ with the maximum level of the tree set to 7, and $ANN$ with three hidden layers. Bagging on $KNN$ with $K$ = 9 is the optimal model for risk classification, reaching average accuracy, AUC, recall, and F1 score of 0.90, 0.93, 0.82, and 0.89, respectively.
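Below is a minimal sketch of the evaluation protocol described in the abstract (bagging on a $KNN$ base classifier with $K$ = 9, scored by repeated 10-fold cross-validation), written with scikit-learn. The dataset, feature scaling step, and ensemble size (`n_estimators=50`) are assumptions for illustration; the paper's business-delinquency data is not included here, so a synthetic stand-in is generated with `make_classification`.

```python
# Sketch of the abstract's evaluation protocol, assuming scikit-learn.
# The data below is a synthetic placeholder, not the paper's delinquency dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: replace with the actual delinquency features and labels.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)

# Bagging on a KNN base classifier with k = 9, the reported optimal model.
# Standardization is added because KNN is distance-based (an assumption here).
model = make_pipeline(
    StandardScaler(),
    BaggingClassifier(KNeighborsClassifier(n_neighbors=9),
                      n_estimators=50, random_state=0),
)

# 10-fold cross-validation repeated 10 times, scored with the four metrics
# used in the paper: accuracy, AUC of ROC, recall, and F1.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_validate(model, X, y, cv=cv,
                        scoring=("accuracy", "roc_auc", "recall", "f1"))

for name in ("accuracy", "roc_auc", "recall", "f1"):
    print(f"{name}: {scores['test_' + name].mean():.2f}")
```

The repeated stratified splitting mirrors the paper's 10-times-repeated 10-fold protocol; the number of bagged estimators is not stated in the abstract, so the value above is arbitrary.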
