
Networked Multi-Agent Deep Reinforcement Learning Framework for the Provision of Ancillary Services in Hybrid Power Plants

Authors
  • Muhammad Ikram

    (School of Engineering, Edith Cowan University, Joondalup, Perth, WA 6027, Australia)

  • Daryoush Habibi

    (School of Engineering, Edith Cowan University, Joondalup, Perth, WA 6027, Australia)

  • Asma Aziz

    (School of Engineering, Edith Cowan University, Joondalup, Perth, WA 6027, Australia)

Abstract

Inverter-based resources (IBRs) are becoming more prominent with the increasing penetration of renewable energy sources, which reduces power system inertia and compromises power system stability and grid support services. At present, optimal coordination among generation technologies remains a significant challenge for frequency control services. This paper presents a novel networked multi-agent deep reinforcement learning (N-MADRL) scheme for optimal dispatch and frequency control services. First, we develop a model-free environment consisting of a photovoltaic (PV) plant, a wind plant (WP), and an energy storage system (ESS) plant. The proposed framework combines multi-agent actor-critic (MAAC) and soft actor-critic (SAC) schemes to dispatch active power optimally, mitigate frequency deviations, aid reserve capacity management, and improve energy balancing. Second, frequency stability and optimal dispatch are formulated in the N-MADRL framework subject to physical constraints in a dynamic simulation environment. Third, a decentralised coordinated control scheme is implemented in the hybrid power plant (HPP) environment using communication-resilient scenarios to address system vulnerabilities. Finally, the practicality of the N-MADRL approach is demonstrated in a Grid2Op dynamic simulation environment for optimal dispatch, energy reserve management, and frequency control. Results on the IEEE 14-bus network show that, compared to PPO and DDPG, N-MADRL achieves 42.10% and 61.40% higher efficiency for optimal dispatch, along with improvements of 68.30% and 74.48% in mitigating frequency deviations, respectively. The proposed approach outperforms existing methods under partially, fully, and randomly connected communication scenarios by effectively handling uncertainties, system intermittency, and communication resiliency.
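To make the interaction loop concrete, the following is a minimal, hypothetical sketch, not the authors' implementation, of how per-plant agents could issue dispatch actions in a Grid2Op IEEE 14-bus environment. The environment name l2rpn_case14_sandbox (Grid2Op's packaged 14-bus test case), the mapping of the first three redispatchable generators to the PV, wind, and ESS plants, and the random Gaussian placeholder policies are all assumptions; in the paper these policies would be the trained MAAC/SAC actors coordinating over the agent communication network.

    # A minimal, hypothetical sketch (not the authors' released code): three
    # plant agents issue redispatch actions on Grid2Op's IEEE 14-bus test case.
    # The Gaussian "policies" are placeholders for trained MAAC/SAC actors.
    import numpy as np
    import grid2op  # pip install grid2op

    env = grid2op.make("l2rpn_case14_sandbox")  # packaged IEEE 14-bus case
    rng = np.random.default_rng(seed=0)

    # Assumption: the first three redispatchable generators stand in for the
    # PV, wind, and ESS plants of the hybrid power plant.
    plant_gen_ids = np.where(env.gen_redispatchable)[0][:3]

    class PlantAgent:
        """Stand-in for one networked actor in the N-MADRL scheme."""
        def __init__(self, gen_id):
            self.gen_id = int(gen_id)

        def act(self, obs):
            # A SAC actor would sample from a learned, observation-conditioned
            # Gaussian policy; here a small random setpoint change (MW) is
            # drawn instead, purely as a placeholder.
            return (self.gen_id, float(rng.normal(0.0, 1.0)))

    agents = [PlantAgent(g) for g in plant_gen_ids]

    obs = env.reset()
    for _ in range(50):  # short rollout
        # Each agent proposes its own redispatch; proposals are combined into
        # one grid-level action, mimicking decentralised coordinated dispatch.
        action = env.action_space({"redispatch": [a.act(obs) for a in agents]})
        obs, reward, done, info = env.step(action)
        if done:  # episode ended (scenario exhausted or game over)
            obs = env.reset()

Grid2Op's native signals concern power flows and line loadings rather than frequency, so the paper's frequency-deviation objective would have to enter through a custom reward; the sketch above omits training and rewards and shows only the decentralised action pipeline.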

Suggested Citation

  • Muhammad Ikram & Daryoush Habibi & Asma Aziz, 2025. "Networked Multi-Agent Deep Reinforcement Learning Framework for the Provision of Ancillary Services in Hybrid Power Plants," Energies, MDPI, vol. 18(10), pages 1-34, May.
  • Handle: RePEc:gam:jeners:v:18:y:2025:i:10:p:2666-:d:1661162

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/18/10/2666/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/18/10/2666/
    Download Restriction: no

    References listed on IDEAS

    1. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    2. May, Ross & Huang, Pei, 2023. "A multi-agent reinforcement learning approach for investigating and optimising peer-to-peer prosumer energy markets," Applied Energy, Elsevier, vol. 334(C).
    3. Dong, Lei & Lin, Hao & Qiao, Ji & Zhang, Tao & Zhang, Shiming & Pu, Tianjiao, 2024. "A coordinated active and reactive power optimization approach for multi-microgrids connected to distribution networks with multi-actor-attention-critic deep reinforcement learning," Applied Energy, Elsevier, vol. 373(C).
    4. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    5. Wu, Haochi & Qiu, Dawei & Zhang, Liyu & Sun, Mingyang, 2024. "Adaptive multi-agent reinforcement learning for flexible resource management in a virtual power plant with dynamic participating multi-energy buildings," Applied Energy, Elsevier, vol. 374(C).
    6. Lv, Chaoxian & Liang, Rui & Zhang, Ge & Zhang, Xiaotong & Jin, Wei, 2023. "Energy accommodation-oriented interaction of active distribution network and central energy station considering soft open points," Energy, Elsevier, vol. 268(C).
    7. Anjaiah, Kanche & Dash, P.K. & Bisoi, Ranjeeta & Dhar, Snehamoy & Mishra, S.P., 2024. "A new approach for active and reactive power management in renewable based hybrid microgrid considering storage devices," Applied Energy, Elsevier, vol. 367(C).
    8. Alisher Askarov & Vladimir Rudnik & Nikolay Ruban & Pavel Radko & Pavel Ilyushin & Aleksey Suvorov, 2024. "Enhanced Virtual Synchronous Generator with Angular Frequency Deviation Feedforward and Energy Recovery Control for Energy Storage System," Mathematics, MDPI, vol. 12(17), pages 1-26, August.
    9. Hua, Min & Zhang, Cetengfei & Zhang, Fanggang & Li, Zhi & Yu, Xiaoli & Xu, Hongming & Zhou, Quan, 2023. "Energy management of multi-mode plug-in hybrid electric vehicle using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 348(C).
    10. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    11. Ochoa, Tomás & Gil, Esteban & Angulo, Alejandro & Valle, Carlos, 2022. "Multi-agent deep reinforcement learning for efficient multi-timescale bidding of a hybrid power plant in day-ahead and real-time markets," Applied Energy, Elsevier, vol. 317(C).
    12. Kim, Yong Soon & Park, Gye Hyun & Kim, Seung Wan & Kim, Dam, 2024. "Incentive design for hybrid energy storage system investment to PV owners considering value of grid services," Applied Energy, Elsevier, vol. 373(C).
    13. Xu, Xuesong & Xu, Kai & Zeng, Ziyang & Tang, Jiale & He, Yuanxing & Shi, Guangze & Zhang, Tao, 2024. "Collaborative optimization of multi-energy multi-microgrid system: A hierarchical trust-region multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 375(C).
    14. Čović, Nikolina & Pavić, Ivan & Pandžić, Hrvoje, 2024. "Multi-energy balancing services provision from a hybrid power plant: PV, battery, and hydrogen technologies," Applied Energy, Elsevier, vol. 374(C).
    15. Jin, Ruiyang & Zhou, Yuke & Lu, Chao & Song, Jie, 2022. "Deep reinforcement learning-based strategy for charging station participating in demand response," Applied Energy, Elsevier, vol. 328(C).
    16. Tavakol Aghaei, Vahid & Ağababaoğlu, Arda & Bawo, Biram & Naseradinmousavi, Peiman & Yıldırım, Sinan & Yeşilyurt, Serhat & Onat, Ahmet, 2023. "Energy optimization of wind turbines via a neural control policy based on reinforcement learning Markov chain Monte Carlo algorithm," Applied Energy, Elsevier, vol. 341(C).
    17. Li, Yutong & Hou, Jian & Yan, Gangfeng, 2024. "Exploration-enhanced multi-agent reinforcement learning for distributed PV-ESS scheduling with incomplete data," Applied Energy, Elsevier, vol. 359(C).
    18. Xiaohan Fang & Jinkuan Wang & Guanru Song & Yinghua Han & Qiang Zhao & Zhiao Cao, 2019. "Multi-Agent Reinforcement Learning Approach for Residential Microgrid Energy Scheduling," Energies, MDPI, vol. 13(1), pages 1-26, December.
    19. Abid, Md. Shadman & Apon, Hasan Jamil & Hossain, Salman & Ahmed, Ashik & Ahshan, Razzaqul & Lipu, M.S. Hossain, 2024. "A novel multi-objective optimization based multi-agent deep reinforcement learning approach for microgrid resources planning," Applied Energy, Elsevier, vol. 353(PA).
    20. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun, 2022. "Coordinated load frequency control of multi-area integrated energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    21. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    22. Xia, Qinqin & Wang, Yu & Zou, Yao & Yan, Ziming & Zhou, Niancheng & Chi, Yuan & Wang, Qianggang, 2024. "Regional-privacy-preserving operation of networked microgrids: Edge-cloud cooperative learning with differentiated policies," Applied Energy, Elsevier, vol. 370(C).
    23. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    24. Xiang, Yue & Lu, Yu & Liu, Junyong, 2023. "Deep reinforcement learning based topology-aware voltage regulation of distribution networks with distributed energy storage," Applied Energy, Elsevier, vol. 332(C).
    25. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    26. Kofinas, P. & Dounis, A.I. & Vouros, G.A., 2018. "Fuzzy Q-Learning for multi-agent decentralized energy management in microgrids," Applied Energy, Elsevier, vol. 219(C), pages 53-67.
    27. Jendoubi, Imen & Bouffard, François, 2023. "Multi-agent hierarchical reinforcement learning for energy management," Applied Energy, Elsevier, vol. 332(C).
    28. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    29. Sun, Xiaotian & Xie, Haipeng & Qiu, Dawei & Xiao, Yunpeng & Bie, Zhaohong & Strbac, Goran, 2023. "Decentralized frequency regulation service provision for virtual power plants: A best response potential game approach," Applied Energy, Elsevier, vol. 352(C).
    30. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2023. "Two-timescale autonomous energy management strategy based on multi-agent deep reinforcement learning approach for residential multicarrier energy system," Applied Energy, Elsevier, vol. 351(C).
    31. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    32. Guo, Guodong & Zhang, Mengfan & Gong, Yanfeng & Xu, Qianwen, 2023. "Safe multi-agent deep reinforcement learning for real-time decentralized control of inverter based renewable energy resources considering communication delay," Applied Energy, Elsevier, vol. 349(C).
    33. Klyve, Øyvind Sommer & Grab, Robin & Olkkonen, Ville & Marstein, Erik Stensrud, 2024. "Influence of high-resolution data on accurate curtailment loss estimation and optimal design of hybrid PV–wind power plants," Applied Energy, Elsevier, vol. 372(C).
    34. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    35. Zhang, Tengxi & Xin, Li & Wang, Shunjiang & Guo, Ren & Wang, Wentao & Cui, Jia & Wang, Peng, 2024. "A novel approach of energy and reserve scheduling for hybrid power systems: Frequency security constraints," Applied Energy, Elsevier, vol. 361(C).
    36. Si, Ruiqi & Chen, Siyuan & Zhang, Jun & Xu, Jian & Zhang, Luxi, 2024. "A multi-agent reinforcement learning method for distribution system restoration considering dynamic network reconfiguration," Applied Energy, Elsevier, vol. 372(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    3. Wu, Haochi & Qiu, Dawei & Zhang, Liyu & Sun, Mingyang, 2024. "Adaptive multi-agent reinforcement learning for flexible resource management in a virtual power plant with dynamic participating multi-energy buildings," Applied Energy, Elsevier, vol. 374(C).
    4. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    5. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    6. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    7. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    8. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    9. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    10. Yang, Ting & Xu, Zheming & Ji, Shijie & Liu, Guoliang & Li, Xinhong & Kong, Haibo, 2025. "Cooperative optimal dispatch of multi-microgrids for low carbon economy based on personalized federated reinforcement learning," Applied Energy, Elsevier, vol. 378(PA).
    11. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    12. Xue, Lin & Zhang, Yao & Wang, Jianxue & Li, Haotian & Li, Fangshi, 2024. "Privacy-preserving multi-level co-regulation of VPPs via hierarchical safe deep reinforcement learning," Applied Energy, Elsevier, vol. 371(C).
    13. Panagiotis Michailidis & Iakovos Michailidis & Elias Kosmatopoulos, 2025. "Reinforcement Learning for Optimizing Renewable Energy Utilization in Buildings: A Review on Applications and Innovations," Energies, MDPI, vol. 18(7), pages 1-40, March.
    14. Zhou, Yanting & Ma, Zhongjing & Shi, Xingyu & Zou, Suli, 2024. "Multi-agent optimal scheduling for integrated energy system considering the global carbon emission constraint," Energy, Elsevier, vol. 288(C).
    15. Li, Yutong & Hou, Jian & Yan, Gangfeng, 2024. "Exploration-enhanced multi-agent reinforcement learning for distributed PV-ESS scheduling with incomplete data," Applied Energy, Elsevier, vol. 359(C).
    16. Superchi, Francesco & Moustakis, Antonis & Pechlivanoglou, George & Bianchini, Alessandro, 2025. "On the importance of degradation modeling for the robust design of hybrid energy systems including renewables and storage," Applied Energy, Elsevier, vol. 377(PD).
    17. Liangcai Zhou & Long Huo & Linlin Liu & Hao Xu & Rui Chen & Xin Chen, 2025. "Optimal Power Flow for High Spatial and Temporal Resolution Power Systems with High Renewable Energy Penetration Using Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 18(7), pages 1-14, April.
    18. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    19. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    20. He, Wangli & Li, Chengyuan & Cai, Chenhao & Qing, Xiangyun & Du, Wenli, 2024. "Suppressing active power fluctuations at PCC in grid-connection microgrids via multiple BESSs: A collaborative multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 373(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:18:y:2025:i:10:p:2666-:d:1661162. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.