
Robust Energy Management Policies for Solar Microgrids via Reinforcement Learning

Authors

Listed:
  • Gerald Jones

    (Department of Industrial and Systems Engineering, University of Tennessee, Knoxville, TN 37996, USA)

  • Xueping Li

    (Department of Industrial and Systems Engineering, University of Tennessee, Knoxville, TN 37996, USA)

  • Yulin Sun

    (School of Accounting, Southwestern University of Finance and Economics, Chengdu 610074, China)

Abstract

As the integration of renewable energy expands, effective energy system management becomes increasingly crucial. Microgrids with distributed renewable generation offer green energy and resilience, but the variability of renewable generation makes it essential to pair them with energy storage and a suitable energy management system (EMS). Reinforcement learning (RL)-based EMSs have shown promising results in handling these complexities. However, as intermittent grid disruptions and disconnections from the main utility become more frequent, concerns arise about the robustness of the resulting policies. This study investigates the resilience of RL-based EMSs to unforeseen grid disconnections when trained in grid-connected scenarios. Specifically, we evaluate the resilience of policies derived from advantage actor–critic (A2C) and proximal policy optimization (PPO) networks trained in both grid-connected and uncertain grid-connectivity scenarios. The simulation employs stochastic models of solar generation and load built from real-world data. Our findings indicate that grid-trained PPO and A2C excel in cost coverage, with PPO performing better. In islanded or uncertain-connectivity scenarios, however, the demand-coverage hierarchy shifts: the disruption-trained A2C model achieves the best demand coverage when islanded, whereas the grid-trained A2C network performs best under uncertain grid connectivity. This study enhances the understanding of the resilience of RL-based solutions under varied training regimes and provides an analysis of the generated EMS policies.
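To make the training regimes described above concrete, the sketch below is a minimal, hypothetical rendition of the setup, not the authors' implementation: a stochastic microgrid environment with solar and load uncertainty plus a Bernoulli grid-availability flag, trained with PPO and A2C. It assumes the Gymnasium and stable-baselines3 libraries, and every name and numeric value (MicrogridEnv, battery capacity, price, outage probabilities, penalty weights) is an illustrative assumption.

```python
# A minimal sketch (assumed libraries: gymnasium, stable-baselines3) of the
# scenario the abstract describes: an EMS agent controls a battery under
# stochastic solar/load, with the grid available or not each step.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO, A2C


class MicrogridEnv(gym.Env):
    """State: [solar, load, battery SOC, grid available]; action: battery power."""

    def __init__(self, outage_prob=0.0, horizon=24):
        super().__init__()
        self.outage_prob = outage_prob   # 0.0 = grid-connected training scenario
        self.horizon = horizon
        self.capacity = 10.0             # battery capacity (kWh), illustrative
        self.price = 0.15                # grid energy price ($/kWh), illustrative
        # Action in [-1, 1]: fraction of allowed hourly (dis)charge throughput.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(0.0, np.inf, shape=(4,), dtype=np.float32)

    def _obs(self):
        return np.array([self.solar, self.load, self.soc, self.grid], dtype=np.float32)

    def _sample_exogenous(self):
        hour = self.t % 24
        # Stochastic solar (daylight bell curve) and load; stand-ins for the
        # real-world data the study uses.
        self.solar = max(0.0, 4.0 * np.exp(-((hour - 12) ** 2) / 18.0)
                         + self.np_random.normal(0.0, 0.3))
        self.load = max(0.1, 2.0 + self.np_random.normal(0.0, 0.5))
        self.grid = float(self.np_random.random() >= self.outage_prob)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.soc = 0, 0.5 * self.capacity
        self._sample_exogenous()
        return self._obs(), {}

    def step(self, action):
        flow = float(action[0]) * self.capacity * 0.25        # hourly throughput cap
        flow = np.clip(flow, -self.soc, self.capacity - self.soc)
        self.soc += flow
        net = self.load + flow - self.solar                   # residual demand
        if self.grid:
            reward = -self.price * max(net, 0.0)              # buy shortfall from grid
        else:
            reward = -1.0 * max(net, 0.0)                     # islanded: penalize unmet demand
        self.t += 1
        self._sample_exogenous()
        return self._obs(), reward, self.t >= self.horizon, False, {}


# One policy per algorithm and training regime, mirroring the grid-connected
# vs. uncertain-connectivity comparison; hyperparameters are library defaults.
for algo in (PPO, A2C):
    for p in (0.0, 0.3):                                      # outage probabilities, illustrative
        model = algo("MlpPolicy", MicrogridEnv(outage_prob=p), verbose=0)
        model.learn(total_timesteps=20_000)
```

Evaluating each trained policy in islanded and uncertain-connectivity versions of the environment (e.g., `MicrogridEnv(outage_prob=1.0)`) would then reproduce the kind of cross-scenario robustness comparison the study reports.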

Suggested Citation

  • Gerald Jones & Xueping Li & Yulin Sun, 2024. "Robust Energy Management Policies for Solar Microgrids via Reinforcement Learning," Energies, MDPI, vol. 17(12), pages 1-22, June.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:12:p:2821-:d:1411207

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/12/2821/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/12/2821/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhou, Dengji & Yan, Siyun & Huang, Dawen & Shao, Tiemin & Xiao, Wang & Hao, Jiarui & Wang, Chen & Yu, Tianqi, 2022. "Modeling and simulation of the hydrogen blended gas-electricity integrated energy system and influence analysis of hydrogen blending modes," Energy, Elsevier, vol. 239(PA).
    2. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    3. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    4. Ahmed M. Hussien & Jonghoon Kim & Abdulaziz Alkuhayli & Mohammed Alharbi & Hany M. Hasanien & Marcos Tostado-Véliz & Rania A. Turky & Francisco Jurado, 2022. "Adaptive PI Control Strategy for Optimal Microgrid Autonomous Operation," Sustainability, MDPI, vol. 14(22), pages 1-22, November.
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Lan, Penghang & Chen, She & Li, Qihang & Li, Kelin & Wang, Feng & Zhao, Yaoxun, 2024. "Intelligent hydrogen-ammonia combined energy storage system with deep reinforcement learning," Renewable Energy, Elsevier, vol. 237(PB).
    7. Constantino Dário Justo & José Eduardo Tafula & Pedro Moura, 2022. "Planning Sustainable Energy Systems in the Southern African Development Community: A Review of Power Systems Planning Approaches," Energies, MDPI, vol. 15(21), pages 1-28, October.
    8. Lee, Sangyoon & Prabawa, Panggah & Choi, Dae-Hyun, 2025. "Joint peak power and carbon emission shaving in active distribution systems using carbon emission flow-based deep reinforcement learning," Applied Energy, Elsevier, vol. 379(C).
    9. Dong, Xiao-Jian & Shen, Jia-Ni & Ma, Zi-Feng & He, Yi-Jun, 2025. "Stochastic optimization of integrated electric vehicle charging stations under photovoltaic uncertainty and battery power constraints," Energy, Elsevier, vol. 314(C).
    10. Xiong, Kang & Hu, Weihao & Cao, Di & Li, Sichen & Zhang, Guozhou & Liu, Wen & Huang, Qi & Chen, Zhe, 2023. "Coordinated energy management strategy for multi-energy hub with thermo-electrochemical effect based power-to-ammonia: A multi-agent deep reinforcement learning enabled approach," Renewable Energy, Elsevier, vol. 214(C), pages 216-232.
    11. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    12. Zhang, Yiwen & Lin, Rui & Mei, Zhen & Lyu, Minghao & Jiang, Huaiguang & Xue, Ying & Zhang, Jun & Gao, David Wenzhong, 2024. "Interior-point policy optimization based multi-agent deep reinforcement learning method for secure home energy management under various uncertainties," Applied Energy, Elsevier, vol. 376(PA).
    13. Hong, Yejin & Yoon, Sungmin & Choi, Sebin, 2023. "Operational signature-based symbolic hierarchical clustering for building energy, operation, and efficiency towards carbon neutrality," Energy, Elsevier, vol. 265(C).
    14. Xu, Xuesong & Xu, Kai & Zeng, Ziyang & Tang, Jiale & He, Yuanxing & Shi, Guangze & Zhang, Tao, 2024. "Collaborative optimization of multi-energy multi-microgrid system: A hierarchical trust-region multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 375(C).
    15. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    16. Akhtar Hussain & Hak-Man Kim, 2025. "A Rule-Based Modular Energy Management System for AC/DC Hybrid Microgrids," Sustainability, MDPI, vol. 17(3), pages 1-28, January.
    17. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    18. Lu, Xiaoxing & Li, Kangping & Xu, Hanchen & Wang, Fei & Zhou, Zhenyu & Zhang, Yagang, 2020. "Fundamentals and business model for resource aggregator of demand response in electricity markets," Energy, Elsevier, vol. 204(C).
    19. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    20. Wang, Jiawei & Wang, Yi & Qiu, Dawei & Su, Hanguang & Strbac, Goran & Gao, Zhiwei, 2025. "Resilient energy management of a multi-energy building under low-temperature district heating: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 378(PA).
