
Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach

Author

Listed:
  • Wang, Yi
  • Qiu, Dawei
  • Sun, Mingyang
  • Strbac, Goran
  • Gao, Zhiwei

Abstract

The large-scale integration of distributed energy resources into the energy industry enables a fast transition to a decarbonized future, but it also raises potential challenges for secure and reliable operation. Multi-energy Microgrids (MEMGs), as localized small multi-energy systems, can effectively integrate a variety of energy components across multiple energy sectors, and they have recently been recognized as a valid solution for improving operational security and reliability. As a result, substantial research has investigated MEMG energy management problems, spanning both model-based optimization and model-free learning approaches. Compared to optimization approaches, reinforcement learning is being widely deployed in MEMG energy management owing to its ability to handle highly dynamic and stochastic processes without requiring explicit system knowledge. However, it remains difficult for conventional model-free reinforcement learning methods to capture the physical constraints of the MEMG model, which can jeopardize secure operation. To address this research challenge, this paper proposes a novel safe reinforcement learning method that learns a dynamic security assessment rule to abstract a physical-informed safety layer on top of the conventional model-free reinforcement learning energy management policy; the layer respects all physical constraints by mathematically solving an action-correction formulation. In this setting, secure energy management of the MEMG is guaranteed during both training and testing.
Extensive case studies on two integrated systems (a small 6-bus power and 7-node gas network, and a large 33-bus power and 20-node gas network) verify that the proposed physical-informed reinforcement learning method achieves cost-effective MEMG energy management while respecting all physical constraints, outperforming conventional reinforcement learning and optimization approaches.
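The safety-layer idea summarized in the abstract — correcting the learned policy's proposed action so that physical constraints hold before it is applied to the system — can be illustrated with a minimal projection step. The sketch below is not the authors' formulation: it assumes the physical constraints have been (hypothetically) linearized into `A @ a <= b` with box bounds on the action, and it solves the resulting small quadratic program with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def safe_action(a_rl, A, b, bounds):
    """Project the RL policy's action a_rl onto the feasible set
    {a : A @ a <= b, lb <= a <= ub} by minimizing ||a - a_rl||^2,
    i.e. the smallest correction that restores feasibility."""
    lo = np.array([l for l, _ in bounds])
    hi = np.array([h for _, h in bounds])
    res = minimize(
        fun=lambda a: float(np.sum((a - a_rl) ** 2)),
        x0=np.clip(a_rl, lo, hi),  # warm start inside the box bounds
        method="SLSQP",
        bounds=bounds,
        constraints=[{"type": "ineq", "fun": lambda a: b - A @ a}],
    )
    return res.x

# Example: the policy proposes (1.0, 1.0), but a hypothetical network
# limit a1 + a2 <= 1 must hold; the projection returns ~[0.5, 0.5].
a_safe = safe_action(
    np.array([1.0, 1.0]),
    A=np.array([[1.0, 1.0]]),
    b=np.array([1.0]),
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
```

In the paper's setting the safety layer is learned from a dynamic security assessment rule rather than given analytically; this sketch only shows the generic action-correction mechanism that such a layer implements.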

Suggested Citation

  • Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
  • Handle: RePEc:eee:appene:v:335:y:2023:i:c:s030626192300123x
    DOI: 10.1016/j.apenergy.2023.120759

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S030626192300123X
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2023.120759?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Chinmoy, Lakshmi & Iniyan, S. & Goic, Ranko, 2019. "Modeling wind power investments, policies and social benefits for deregulated electricity market – A review," Applied Energy, Elsevier, vol. 242(C), pages 364-377.
    2. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    3. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    4. Wang, Qi & Zhang, Chunyu & Ding, Yi & Xydis, George & Wang, Jianhui & Østergaard, Jacob, 2015. "Review of real-time electricity markets for integrating Distributed Energy Resources and Demand Response," Applied Energy, Elsevier, vol. 138(C), pages 695-706.
    5. Yang, Shiyu & Wan, Man Pun & Chen, Wanyu & Ng, Bing Feng & Dubey, Swapnil, 2020. "Model predictive control with adaptive machine-learning-based model for building energy efficiency and comfort optimization," Applied Energy, Elsevier, vol. 271(C).
    6. Li, Ke & Yang, Fan & Wang, Lupan & Yan, Yi & Wang, Haiyang & Zhang, Chenghui, 2022. "A scenario-based two-stage stochastic optimization approach for multi-energy microgrids," Applied Energy, Elsevier, vol. 322(C).
    7. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    8. Zhao, Liyuan & Yang, Ting & Li, Wei & Zomaya, Albert Y., 2022. "Deep reinforcement learning-based joint load scheduling for household multi-energy system," Applied Energy, Elsevier, vol. 324(C).
    9. Ding, Tao & Lin, Yanling & Bie, Zhaohong & Chen, Chen, 2017. "A resilient microgrid formation strategy for load restoration considering master-slave distributed generators and topology reconfiguration," Applied Energy, Elsevier, vol. 199(C), pages 205-216.
    10. Tobajas, Javier & Garcia-Torres, Felix & Roncero-Sánchez, Pedro & Vázquez, Javier & Bellatreche, Ladjel & Nieto, Emilio, 2022. "Resilience-oriented schedule of microgrids with hybrid energy storage system using model predictive control," Applied Energy, Elsevier, vol. 306(PB).
    11. Ma, Wei & Wang, Wei & Chen, Zhe & Wu, Xuezhi & Hu, Ruonan & Tang, Fen & Zhang, Weige, 2021. "Voltage regulation methods for active distribution networks considering the reactive power optimization of substations," Applied Energy, Elsevier, vol. 284(C).
    12. Gao, Yuanqi & Yu, Nanpeng, 2022. "Model-augmented safe reinforcement learning for Volt-VAR control in power distribution networks," Applied Energy, Elsevier, vol. 313(C).
    13. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    14. Wang, Jianxiao & Zhong, Haiwang & Ma, Ziming & Xia, Qing & Kang, Chongqing, 2017. "Review and prospect of integrated demand response in the multi-energy system," Applied Energy, Elsevier, vol. 202(C), pages 772-782.
    15. Janko, Samantha & Johnson, Nathan G., 2020. "Reputation-based competitive pricing negotiation and power trading for grid-connected microgrid networks," Applied Energy, Elsevier, vol. 277(C).
    16. Quadri, Imran Ahmad & Bhowmick, S. & Joshi, D., 2018. "A comprehensive technique for optimal allocation of distributed energy resources in radial distribution systems," Applied Energy, Elsevier, vol. 211(C), pages 1245-1260.
    17. Moretti, Luca & Martelli, Emanuele & Manzolini, Giampaolo, 2020. "An efficient robust optimization model for the unit commitment and dispatch of multi-energy systems and microgrids," Applied Energy, Elsevier, vol. 261(C).
    18. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    19. Zia, Muhammad Fahad & Elbouchikhi, Elhoussin & Benbouzid, Mohamed, 2018. "Microgrids energy management systems: A critical review on methods, solutions, and prospects," Applied Energy, Elsevier, vol. 222(C), pages 1033-1055.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Omar A. Beg & Asad Ali Khan & Waqas Ur Rehman & Ali Hassan, 2023. "A Review of AI-Based Cyber-Attack Detection and Mitigation in Microgrids," Energies, MDPI, vol. 16(22), pages 1-23, November.
    2. Paesschesoone, Siebe & Kayedpour, Nezmin & Manna, Carlo & Crevecoeur, Guillaume, 2024. "Reinforcement learning for an enhanced energy flexibility controller incorporating predictive safety filter and adaptive policy updates," Applied Energy, Elsevier, vol. 368(C).
    3. Zhao, Yincheng & Zhang, Guozhou & Hu, Weihao & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2023. "Meta-learning based voltage control strategy for emergency faults of active distribution networks," Applied Energy, Elsevier, vol. 349(C).
    4. Wu, Huayi & Xu, Zhao, 2024. "Multi-energy flow calculation in integrated energy system via topological graph attention convolutional network with transfer learning," Energy, Elsevier, vol. 303(C).
    5. Manna, Carlo & Lahariya, Manu & Karami, Farzaneh & Develder, Chris, 2023. "A data-driven optimization framework for industrial demand-side flexibility," Energy, Elsevier, vol. 278(C).
    6. Li, Xiangyu & Luo, Fengji & Li, Chaojie, 2024. "Multi-agent deep reinforcement learning-based autonomous decision-making framework for community virtual power plants," Applied Energy, Elsevier, vol. 360(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Pinciroli, Luca & Baraldi, Piero & Compare, Michele & Zio, Enrico, 2023. "Optimal operation and maintenance of energy storage systems in grid-connected microgrids by deep reinforcement learning," Applied Energy, Elsevier, vol. 352(C).
    4. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    5. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    6. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    7. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    8. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    9. Ritu Kandari & Neeraj Neeraj & Alexander Micallef, 2022. "Review on Recent Strategies for Integrating Energy Storage Systems in Microgrids," Energies, MDPI, vol. 16(1), pages 1-24, December.
    10. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    11. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    12. Polimeni, Simone & Moretti, Luca & Martelli, Emanuele & Leva, Sonia & Manzolini, Giampaolo, 2023. "A novel stochastic model for flexible unit commitment of off-grid microgrids," Applied Energy, Elsevier, vol. 331(C).
    13. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    14. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    15. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    16. Wang, Xuan & Shu, Gequn & Tian, Hua & Wang, Rui & Cai, Jinwen, 2020. "Operation performance comparison of CCHP systems with cascade waste heat recovery systems by simulation and operation optimisation," Energy, Elsevier, vol. 206(C).
    17. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    18. Yang, Chao & Yao, Wei & Fang, Jiakun & Ai, Xiaomeng & Chen, Zhe & Wen, Jinyu & He, Haibo, 2019. "Dynamic event-triggered robust secondary frequency control for islanded AC microgrid," Applied Energy, Elsevier, vol. 242(C), pages 821-836.
    19. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    20. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.