Author
Listed:
- Ximing Zhang
(China Southern Power Grid Ltd., Guangzhou 510663, China)
- Xiyuan Ma
(Digital Grid Research Institute, China Southern Power Grid, Guangzhou 510663, China)
- Yun Yu
(China Southern Power Grid Ltd., Guangzhou 510663, China)
- Duotong Yang
(Digital Grid Research Institute, China Southern Power Grid, Guangzhou 510663, China)
- Zhida Lin
(China Southern Power Grid Ltd., Guangzhou 510663, China)
- Changcheng Zhou
(Digital Grid Research Institute, China Southern Power Grid, Guangzhou 510663, China)
- Huan Xu
(China Southern Power Grid Ltd., Guangzhou 510663, China)
- Zhuohuan Li
(Digital Grid Research Institute, China Southern Power Grid, Guangzhou 510663, China)
Abstract
With the rapid development of artificial intelligence technology, deep reinforcement learning (DRL) has shown great potential for solving the complex real-time optimal power flow problems of modern power systems. Nevertheless, traditional DRL methodologies face two bottlenecks: (a) suboptimal coordination between exploratory behavior policies and experience-based data exploitation in practical applications, and (b) user distrust stemming from the opacity of the model's decision-making. To address these issues, a model–data hybrid-driven physics-informed reinforcement learning (PIRL) algorithm is proposed in this paper. Specifically, the proposed methodology uses the proximal policy optimization (PPO) algorithm as the agent's foundational framework and constructs a physics-informed actor (PI-actor) network by embedding prior model knowledge, derived from power flow sensitivity, into the agent's actor network via the physics-informed neural network (PINN) method. This achieves two optimization objectives: (a) enhanced environmental perceptibility, which improves experience utilization efficiency through gradient awareness of model knowledge during actor network updates, and (b) improved user trustworthiness, since the action gradient information is mathematically constrained by explicit model knowledge, ensuring that actor updates adhere to safety boundaries. The simulation and validation results show that the PIRL algorithm outperforms the baseline PPO algorithm in terms of training stability, exploration efficiency, economy, and security.
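As a rough illustration of the idea summarized in the abstract (embedding power flow sensitivity information into a PPO actor so that its gradient updates carry model knowledge), the following Python sketch augments a standard PPO clipped surrogate loss with a voltage-violation penalty computed from an assumed linearized sensitivity matrix. The class and function names (PIActor, pirl_actor_loss), the sensitivity matrix S, and the voltage limits are illustrative assumptions under a linearized power flow model, not the paper's actual implementation.

```python
# Illustrative sketch only: a PPO-style actor loss augmented with a
# physics-informed penalty built from an assumed linearized power-flow
# sensitivity matrix S (dV/da), so gradients of the actor loss carry
# model knowledge about how control actions move bus voltages.
import torch
import torch.nn as nn


class PIActor(nn.Module):
    """Gaussian policy whose loss will add a sensitivity-based voltage penalty."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        mean = self.net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())


def pirl_actor_loss(actor, obs, act, old_logp, adv,
                    S, v_base, v_min=0.95, v_max=1.05,
                    clip=0.2, phys_coef=1.0):
    """PPO clipped surrogate plus an assumed physics-informed penalty.

    S      : (act_dim, n_bus) hypothetical linearized dV/da sensitivity matrix
    v_base : (batch, n_bus) base-case bus voltages taken from the state
    The penalty pushes the policy mean away from actions whose predicted
    voltages (via the linear model) violate the assumed limits.
    """
    dist = actor.dist(obs)
    logp = dist.log_prob(act).sum(-1)
    ratio = torch.exp(logp - old_logp)
    surrogate = torch.min(ratio * adv,
                          torch.clamp(ratio, 1 - clip, 1 + clip) * adv).mean()

    # Physics term: predicted voltages under the current policy mean,
    # using the linearized sensitivity model.
    v_pred = v_base + dist.mean @ S
    violation = torch.relu(v_pred - v_max) + torch.relu(v_min - v_pred)
    phys_penalty = (violation ** 2).mean()

    return -surrogate + phys_coef * phys_penalty
```

Because the penalty is differentiable with respect to the policy mean, its gradient flows back through the actor network during updates, which is one plausible way to realize the "gradient awareness of model knowledge" described in the abstract.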
Suggested Citation
Ximing Zhang & Xiyuan Ma & Yun Yu & Duotong Yang & Zhida Lin & Changcheng Zhou & Huan Xu & Zhuohuan Li, 2025.
"Model-Data Hybrid-Driven Real-Time Optimal Power Flow: A Physics-Informed Reinforcement Learning Approach,"
Energies, MDPI, vol. 18(13), pages 1-20, July.
Handle:
RePEc:gam:jeners:v:18:y:2025:i:13:p:3483-:d:1692729
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:18:y:2025:i:13:p:3483-:d:1692729. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.