Paper Title
Interpretable Machine Learning for Power Systems: Establishing Confidence in SHapley Additive exPlanations
Paper Authors
Paper Abstract
Interpretable Machine Learning (IML) is expected to remove significant barriers to the application of Machine Learning (ML) algorithms in power systems. This letter first seeks to showcase the benefits of SHapley Additive exPlanations (SHAP) for understanding the outcomes of ML models, which are increasingly used in power systems. Second, we seek to demonstrate that SHAP explanations are able to capture the underlying physics of the power system. To do so, we show that the Power Transfer Distribution Factors (PTDFs) -- a physics-based linear sensitivity index -- can be derived from SHAP values. Specifically, we take the derivatives of the SHAP values of an ML model trained to learn line flows from generator power injections, using a simple DC power flow case on the 9-bus, 3-generator test network. By demonstrating that SHAP values can be related back to the physics that underpins the power system, we build confidence in the explanations SHAP can offer.
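The core idea in the abstract can be sketched in a few lines. In DC power flow, line flows are linear in the injections, f = PTDF · p, and for a linear model with independent features the exact SHAP value of feature i at input x is φ_i = w_i (x_i − E[x_i]), so ∂φ_i/∂x_i = w_i recovers the corresponding PTDF entry. The toy example below is a minimal illustration of that relationship, not the paper's actual 9-bus case: the PTDF row, sample sizes, and injection distribution are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PTDF row: sensitivity of one line's flow to 3 generator
# injections (illustrative values, not taken from the 9-bus test network).
ptdf_row = np.array([0.4, -0.25, 0.1])

# Sample generator injections and compute the noise-free DC line flow.
P = rng.normal(100.0, 10.0, size=(500, 3))
flow = P @ ptdf_row

# "Train" a linear ML model (least squares) to learn flow from injections.
w, *_ = np.linalg.lstsq(np.c_[P, np.ones(len(P))], flow, rcond=None)
coef, intercept = w[:3], w[3]

# Exact SHAP values for a linear model with independent features:
# phi_i(x) = w_i * (x_i - E[x_i]).
phi = coef * (P - P.mean(axis=0))

# The derivative of each SHAP value w.r.t. its injection is the model
# coefficient, which matches the physics-based PTDF row.
print(np.round(coef, 3))  # ≈ [0.4, -0.25, 0.1]
```

In practice the paper's ML model need not be linear; with a nonlinear model the SHAP derivatives would recover the PTDFs only to the extent that the learned mapping matches the underlying linear DC power flow.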