
Optimal Power Flow for High Spatial and Temporal Resolution Power Systems with High Renewable Energy Penetration Using Multi-Agent Deep Reinforcement Learning

Author

Listed:
  • Liangcai Zhou

    (East China Division, State Grid Corporation of China, No. 882, Pudong South Road, Pudong New Area, Shanghai 200002, China)

  • Long Huo

    (Center of Nanomaterials for Renewable Energy, State Key Laboratory of Electrical Insulation and Power Equipment, School of Electrical Engineering, Xi’an Jiaotong University, Xi’an 710049, China)

  • Linlin Liu

    (East China Division, State Grid Corporation of China, No. 882, Pudong South Road, Pudong New Area, Shanghai 200002, China)

  • Hao Xu

    (East China Division, State Grid Corporation of China, No. 882, Pudong South Road, Pudong New Area, Shanghai 200002, China)

  • Rui Chen

    (Center of Nanomaterials for Renewable Energy, State Key Laboratory of Electrical Insulation and Power Equipment, School of Electrical Engineering, Xi’an Jiaotong University, Xi’an 710049, China)

  • Xin Chen

    (Center of Nanomaterials for Renewable Energy, State Key Laboratory of Electrical Insulation and Power Equipment, School of Electrical Engineering, Xi’an Jiaotong University, Xi’an 710049, China)

Abstract

The increasing integration of renewable energy sources (RESs) introduces significant uncertainties in both generation and demand, presenting critical challenges to the convergence, feasibility, and real-time performance of optimal power flow (OPF). To address these challenges, a multi-agent deep reinforcement learning (DRL) model is proposed that solves the OPF rapidly while satisfying operational constraints. A heterogeneous multi-agent proximal policy optimization (H-MAPPO) algorithm is introduced for multi-area power systems: each agent regulates the output of the generation units in one area, and the agents jointly pursue the global OPF objective, which reduces the complexity of training the DRL model. Additionally, a graph neural network (GNN) is integrated into the DRL framework to capture spatiotemporal features such as RES fluctuations and power grid topology, enriching the input representation and improving the learning efficiency of the DRL model. The proposed model is validated on the RTS-GMLC test system, a benchmark with high spatial–temporal resolution and near-real-world load profiles and generation curves, and its performance is compared with MATPOWER using the interior-point iterative solver. Test results show that the proposed DRL model achieves a 100% convergence and feasibility rate with an optimal generation cost close to that obtained by MATPOWER, while computing solutions up to 85 times faster.
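The per-area decomposition described in the abstract can be sketched in miniature: each agent controls the generator setpoints of one area, and all agents are trained against a single shared reward (negative generation cost, penalized for power imbalance), which is what aligns their local policies with the system-wide OPF objective. This is a hypothetical illustration; the area names, limits, and cost coefficients below are invented, and the paper's actual reward, constraints, and H-MAPPO training loop are far richer.

```python
# Toy sketch of the multi-agent OPF decomposition: one agent per area,
# a shared global reward. All numbers are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    p_min: list   # per-generator lower limits (MW)
    p_max: list   # per-generator upper limits (MW)
    cost: list    # linear cost coefficients ($/MWh)

def clip_action(area, setpoints):
    """Project an agent's raw action onto its generators' feasible box."""
    return [min(max(p, lo), hi)
            for p, lo, hi in zip(setpoints, area.p_min, area.p_max)]

def global_reward(areas, actions, demand, penalty=1000.0):
    """Shared scalar reward: negative total generation cost, penalized by
    the supply-demand imbalance. Every agent receives this same value."""
    total_gen, total_cost = 0.0, 0.0
    for area, act in zip(areas, actions):
        p = clip_action(area, act)
        total_gen += sum(p)
        total_cost += sum(c * pi for c, pi in zip(area.cost, p))
    imbalance = abs(total_gen - demand)
    return -(total_cost + penalty * imbalance)

areas = [
    Area("A", p_min=[10, 10], p_max=[100, 80], cost=[20.0, 35.0]),
    Area("B", p_min=[5],      p_max=[120],     cost=[25.0]),
]
# Two joint actions for a 200 MW demand: the balanced dispatch that
# favors the cheap generators earns the higher shared reward.
r_bad  = global_reward(areas, [[100, 80], [120]], demand=200)  # overshoots
r_good = global_reward(areas, [[100, 20], [80]],  demand=200)  # balanced
assert r_good > r_bad
```

In an actual H-MAPPO setup, each area's policy network would map local (and GNN-encoded topological) observations to continuous setpoints, with the shared reward driving cooperative updates.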

Suggested Citation

  • Liangcai Zhou & Long Huo & Linlin Liu & Hao Xu & Rui Chen & Xin Chen, 2025. "Optimal Power Flow for High Spatial and Temporal Resolution Power Systems with High Renewable Energy Penetration Using Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 18(7), pages 1-14, April.
  • Handle: RePEc:gam:jeners:v:18:y:2025:i:7:p:1809-:d:1627533
    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/18/7/1809/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/18/7/1809/
    Download Restriction: no

    References listed on IDEAS

    1. Li, Chen & Kies, Alexander & Zhou, Kai & Schlott, Markus & Sayed, Omar El & Bilousova, Mariia & Stöcker, Horst, 2024. "Optimal Power Flow in a highly renewable power system based on attention neural networks," Applied Energy, Elsevier, vol. 359(C).
    2. Yue Chen & Zhizhong Guo & Hongbo Li & Yi Yang & Abebe Tilahun Tadie & Guizhong Wang & Yingwei Hou, 2020. "Probabilistic Optimal Power Flow for Day-Ahead Dispatching of Power Systems with High-Proportion Renewable Power Sources," Sustainability, MDPI, vol. 12(2), pages 1-19, January.
    3. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun, 2022. "Coordinated load frequency control of multi-area integrated energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    4. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    5. Jendoubi, Imen & Bouffard, François, 2023. "Multi-agent hierarchical reinforcement learning for energy management," Applied Energy, Elsevier, vol. 332(C).
    6. Runlin Zhang & Nuo Xu & Kai Zhang & Lei Wang & Gui Lu, 2023. "A Parametric Physics-Informed Deep Learning Method for Probabilistic Design of Thermal Protection Systems," Energies, MDPI, vol. 16(9), pages 1-20, April.
    7. Gao, Fang & Xu, Zidong & Yin, Linfei, 2024. "Bayesian deep neural networks for spatio-temporal probabilistic optimal power flow with multi-source renewable energy," Applied Energy, Elsevier, vol. 353(PA).
    8. Skolfield, J. Kyle & Escobedo, Adolfo R., 2022. "Operations research in optimal power flow: A guide to recent and emerging methodologies and applications," European Journal of Operational Research, Elsevier, vol. 300(2), pages 387-404.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    2. Wu, Haochi & Qiu, Dawei & Zhang, Liyu & Sun, Mingyang, 2024. "Adaptive multi-agent reinforcement learning for flexible resource management in a virtual power plant with dynamic participating multi-energy buildings," Applied Energy, Elsevier, vol. 374(C).
    3. Han, Kunlun & Yang, Kai & Yin, Linfei, 2022. "Lightweight actor-critic generative adversarial networks for real-time smart generation control of microgrids," Applied Energy, Elsevier, vol. 317(C).
    4. Yao, Ganzhou & Luo, Zirong & Lu, Zhongyue & Wang, Mangkuan & Shang, Jianzhong & Guerrerob, Josep M., 2023. "Unlocking the potential of wave energy conversion: A comprehensive evaluation of advanced maximum power point tracking techniques and hybrid strategies for sustainable energy harvesting," Renewable and Sustainable Energy Reviews, Elsevier, vol. 185(C).
    5. Dominik Latoń & Jakub Grela & Andrzej Ożadowicz, 2024. "Applications of Deep Reinforcement Learning for Home Energy Management Systems: A Review," Energies, MDPI, vol. 17(24), pages 1-30, December.
    6. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    7. Lan, Penghang & Chen, She & Li, Qihang & Li, Kelin & Wang, Feng & Zhao, Yaoxun, 2024. "Intelligent hydrogen-ammonia combined energy storage system with deep reinforcement learning," Renewable Energy, Elsevier, vol. 237(PB).
    8. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    9. Xinghua Liu & Siwei Qiao & Zhiwei Liu, 2023. "A Survey on Load Frequency Control of Multi-Area Power Systems: Recent Challenges and Strategies," Energies, MDPI, vol. 16(5), pages 1-22, February.
    10. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    11. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    12. Wu, Long & Yin, Xunyuan & Pan, Lei & Liu, Jinfeng, 2023. "Distributed economic predictive control of integrated energy systems for enhanced synergy and grid response: A decomposition and cooperation strategy," Applied Energy, Elsevier, vol. 349(C).
    13. Mokhtar Aly & Emad A. Mohamed & Abdullah M. Noman & Emad M. Ahmed & Fayez F. M. El-Sousy & Masayuki Watanabe, 2023. "Optimized Non-Integer Load Frequency Control Scheme for Interconnected Microgrids in Remote Areas with High Renewable Energy and Electric Vehicle Penetrations," Mathematics, MDPI, vol. 11(9), pages 1-31, April.
    14. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    15. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    16. Haltor Mataifa & Senthil Krishnamurthy & Carl Kriger, 2023. "Comparative Analysis of the Particle Swarm Optimization and Primal-Dual Interior-Point Algorithms for Transmission System Volt/VAR Optimization in Rectangular Voltage Coordinates," Mathematics, MDPI, vol. 11(19), pages 1-29, September.
    17. Bhargav Appasani & Amitkumar V. Jha & Deepak Kumar Gupta & Nicu Bizon & Phatiphat Thounthong, 2023. "PSO α : A Fragmented Swarm Optimisation for Improved Load Frequency Control of a Hybrid Power System Using FOPID," Energies, MDPI, vol. 16(5), pages 1-17, February.
    18. Nebiyu Kedir & Phuong H. D. Nguyen & Citlaly Pérez & Pedro Ponce & Aminah Robinson Fayek, 2023. "Systematic Literature Review on Fuzzy Hybrid Methods in Photovoltaic Solar Energy: Opportunities, Challenges, and Guidance for Implementation," Energies, MDPI, vol. 16(9), pages 1-38, April.
    19. Yin, Linfei & Ge, Wei, 2024. "Mobileception-ResNet for transient stability prediction of novel power systems," Energy, Elsevier, vol. 309(C).
    20. Guo, Yuxiang & Qu, Shengli & Wang, Chuang & Xing, Ziwen & Duan, Kaiwen, 2024. "Optimal dynamic thermal management for data center via soft actor-critic algorithm with dynamic control interval and combined-value state space," Applied Energy, Elsevier, vol. 373(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:18:y:2025:i:7:p:1809-:d:1627533. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.