Paper Title

Safe Reinforcement Learning for Emergency Load Shedding of Power Systems

Paper Authors

Thanh Long Vu, Sayak Mukherjee, Tim Yin, Renke Huang, Jie Tan, Qiuhua Huang

Paper Abstract

The paradigm shift in the electric power grid necessitates a revisit of existing control methods to ensure the grid's security and resilience. In particular, the increased uncertainties and rapidly changing operational conditions in power systems have revealed outstanding issues in terms of the speed, adaptiveness, or scalability of existing control methods. On the other hand, the availability of massive real-time data can provide a clearer picture of what is happening in the grid. Recently, deep reinforcement learning (RL) has been regarded and adopted as a promising approach that leverages massive data for fast and adaptive grid control. However, like most existing machine learning (ML)-based control techniques, RL control usually cannot guarantee the safety of the systems under control. In this paper, we introduce a novel method for safe RL-based load shedding of power systems that can enhance the safe voltage recovery of the electric power grid after experiencing faults. Numerical simulations on the 39-bus IEEE benchmark are performed to demonstrate the effectiveness of the proposed safe RL emergency control, as well as its adaptive capability to faults not seen in training.
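The abstract does not detail the control formulation, but the sketch below gives a rough, hypothetical illustration of how emergency load shedding can be cast as an RL problem with a post-fault voltage-recovery safety criterion. Everything here is an assumption for illustration only, not the paper's model: `ToyLoadSheddingEnv`, the first-order bus dynamics, the 0.9 p.u. recovery rule, and the reward weights are all made up, whereas the paper evaluates on the 39-bus IEEE benchmark with a full transient simulation.

```python
# Minimal, hypothetical sketch (NOT the authors' implementation) of framing
# emergency load shedding as an RL environment with a voltage-recovery check.
import numpy as np

class ToyLoadSheddingEnv:
    """Toy post-fault environment: shedding more load speeds up voltage recovery."""

    def __init__(self, n_buses=3, dt=0.1, seed=0):
        self.n_buses = n_buses              # number of monitored load buses (toy value)
        self.dt = dt                        # simulation step in seconds
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # A fault depresses bus voltages to random low per-unit values.
        self.v = self.rng.uniform(0.6, 0.8, self.n_buses)
        self.t = 0.0
        return self.v.copy()

    def step(self, shed_fraction):
        # Action: fraction of load shed at each bus, in [0, 1].
        shed_fraction = np.clip(shed_fraction, 0.0, 1.0)
        recovery_rate = 0.2 + 1.0 * shed_fraction            # toy first-order dynamics
        self.v += self.dt * recovery_rate * (1.0 - self.v)
        self.t += self.dt

        # Assumed safety criterion, loosely inspired by common transient
        # voltage-recovery rules: voltages should exceed 0.9 p.u. soon after the fault.
        unsafe = self.t > 1.0 and bool(np.any(self.v < 0.9))

        # Reward trades off the amount of load shed against voltage violations.
        reward = -np.sum(shed_fraction) - 10.0 * np.sum(np.maximum(0.9 - self.v, 0.0))
        done = self.t >= 5.0 or unsafe
        return self.v.copy(), reward, done, {"unsafe": unsafe}


# Random-policy rollout, just to show the agent-environment interaction loop.
env = ToyLoadSheddingEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = np.random.uniform(0.0, 0.3, env.n_buses)       # placeholder policy
    obs, reward, done, info = env.step(action)
    total += reward
print(f"episode return: {total:.2f}, unsafe: {info['unsafe']}")
```

In a real study, the toy dynamics would be replaced by a transient power-system simulator and the random policy by a trained safe-RL agent; the point of the sketch is only the observation/action/safety-check loop.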
