
Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park

Author

Listed:
  • Zhu, Dafeng
  • Yang, Bo
  • Liu, Yuxiang
  • Wang, Zhaojian
  • Ma, Kai
  • Guan, Xinping

Abstract

Owing to its large energy consumption, industrial production places a heavy burden on the grid in terms of renewable energy integration and power supply. Because multiple energy sources are coupled and both renewable generation and demand are uncertain, centralized methods incur large computation and coordination overhead. This paper therefore proposes a multi-energy management framework for an industrial park based on centralized training with decentralized execution. The energy management problem is formulated as a partially observable Markov decision process, which is intractable by dynamic programming because the underlying stochastic process is not known a priori. The objective is to minimize long-term energy costs while satisfying users' demand. To solve this problem and speed up computation, a novel multi-agent deep reinforcement learning algorithm is proposed with two key components: a counterfactual baseline that helps contributing agents learn better policies, and soft actor–critic for improving robustness and exploring optimal solutions. A novel reward is designed via the Lagrange multiplier method to enforce the capacity constraints of energy storage. In addition, since a growing number of agents degrades performance through enlarged observation spaces, an attention mechanism is introduced to stabilize the policy and let agents focus on the most relevant energy-related information, improving the exploration efficiency of soft actor–critic. Numerical results based on actual data verify the performance and high scalability of the proposed algorithm, indicating that the industrial park can minimize energy costs under different demands.
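
To make the abstract's reward design concrete, the following minimal Python sketch shows one way a Lagrange multiplier can fold an energy-storage capacity constraint into the per-step reward, with the multiplier updated by dual ascent. This is an illustration only, not the paper's implementation; the state-of-charge bounds, efficiencies, step size, and the random stand-ins for the policy output and energy cost are all assumptions.

    import numpy as np

    # Assumed illustrative parameters (not taken from the paper).
    SOC_MIN, SOC_MAX = 0.1, 0.9    # feasible state-of-charge band
    ETA_CH, ETA_DIS = 0.95, 0.95   # charge / discharge efficiencies
    LAMBDA_LR = 1e-2               # dual (multiplier) step size

    def step_storage(soc, power, dt=1.0, capacity=1.0):
        """Advance state of charge for a charge (+) / discharge (-) power."""
        if power >= 0:
            return soc + ETA_CH * power * dt / capacity
        return soc + power * dt / (ETA_DIS * capacity)

    def violation(soc):
        """Distance of the state of charge outside its feasible band."""
        return max(0.0, SOC_MIN - soc) + max(0.0, soc - SOC_MAX)

    def penalized_reward(energy_cost, soc, lam):
        """Negative cost minus multiplier-weighted constraint violation."""
        return -energy_cost - lam * violation(soc)

    rng = np.random.default_rng(0)
    lam, soc = 0.0, 0.5
    for t in range(96):                       # e.g. one day of 15-min slots
        power = rng.uniform(-0.3, 0.3)        # stand-in for the agent's action
        soc = step_storage(soc, power)
        cost = rng.uniform(0.0, 1.0)          # stand-in for realized energy cost
        r = penalized_reward(cost, soc, lam)  # reward fed to the RL update
        lam = max(0.0, lam + LAMBDA_LR * violation(soc))  # dual ascent

Under this scheme the multiplier grows whenever the constraint is violated, so a learning agent is progressively pushed back inside the feasible state-of-charge band while still minimizing energy cost.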

Suggested Citation

  • Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
  • Handle: RePEc:eee:appene:v:311:y:2022:i:c:s0306261922001064
    DOI: 10.1016/j.apenergy.2022.118636

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922001064
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.118636?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wang, Xiaodi & Liu, Youbo & Zhao, Junbo & Liu, Chang & Liu, Junyong & Yan, Jinyue, 2021. "Surrogate model enabled deep reinforcement learning for hybrid energy community operation," Applied Energy, Elsevier, vol. 289(C).
    2. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    3. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    4. Li, Jiawen & Yu, Tao & Yang, Bo, 2021. "A data-driven output voltage control of solid oxide fuel cell using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    5. Liu, Zhe & Adams, Michelle & Cote, Raymond P. & Geng, Yong & Ren, Jingzheng & Chen, Qinghua & Liu, Weili & Zhu, Xuesong, 2018. "Co-benefits accounting for the implementation of eco-industrial development strategies in the scale of industrial park based on emergy analysis," Renewable and Sustainable Energy Reviews, Elsevier, vol. 81(P1), pages 1522-1529.
    6. Heidari, A. & Mortazavi, S.S. & Bansal, R.C., 2020. "Stochastic effects of ice storage on improvement of an energy hub optimal operation including demand response and renewable energies," Applied Energy, Elsevier, vol. 261(C).
    7. Lu, Xinhui & Liu, Zhaoxi & Ma, Li & Wang, Lingfeng & Zhou, Kaile & Feng, Nanping, 2020. "A robust optimization approach for optimal load dispatch of community energy hub," Applied Energy, Elsevier, vol. 259(C).
    8. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    9. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    10. Li, Yuecheng & He, Hongwen & Khajepour, Amir & Wang, Hong & Peng, Jiankun, 2019. "Energy management for a power-split hybrid electric bus via deep reinforcement learning with terrain information," Applied Energy, Elsevier, vol. 255(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for updates on this item.
    Cited by:

    1. Li, Changzhi & Lin, Wei & Wu, Hangyu & Li, Yang & Zhu, Wenchao & Xie, Changjun & Gooi, Hoay Beng & Zhao, Bo & Zhang, Leiqi, 2023. "Performance degradation decomposition-ensemble prediction of PEMFC using CEEMDAN and dual data-driven model," Renewable Energy, Elsevier, vol. 215(C).
    2. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    3. Dawei Feng & Wenchao Xu & Xinyu Gao & Yun Yang & Shirui Feng & Xiaohu Yang & Hailong Li, 2023. "Carbon Emission Prediction and the Reduction Pathway in Industrial Parks: A Scenario Analysis Based on the Integration of the LEAP Model with LMDI Decomposition," Energies, MDPI, vol. 16(21), pages 1-15, October.
    4. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Zhu, Dafeng & Yang, Bo & Ma, Chengbin & Wang, Zhaojian & Zhu, Shanying & Ma, Kai & Guan, Xinping, 2022. "Stochastic gradient-based fast distributed multi-energy management for an industrial park with temporally-coupled constraints," Applied Energy, Elsevier, vol. 317(C).
    7. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    8. Jiaying Wang & Chunguang Lu & Shuai Zhang & Huajiang Yan & Changsen Feng, 2023. "Optimal Energy Management Strategy of Clustered Industry Factories Considering Carbon Trading and Supply Chain Coupling," Energies, MDPI, vol. 16(24), pages 1, December.
    9. Zhang, Bin & Hu, Weihao & Cao, Di & Ghias, Amer M.Y.M. & Chen, Zhe, 2023. "Novel Data-Driven decentralized coordination model for electric vehicle aggregator and energy hub entities in multi-energy system using an improved multi-agent DRL approach," Applied Energy, Elsevier, vol. 339(C).
    10. Xiong, Kang & Hu, Weihao & Cao, Di & Li, Sichen & Zhang, Guozhou & Liu, Wen & Huang, Qi & Chen, Zhe, 2023. "Coordinated energy management strategy for multi-energy hub with thermo-electrochemical effect based power-to-ammonia: A multi-agent deep reinforcement learning enabled approach," Renewable Energy, Elsevier, vol. 214(C), pages 216-232.
    11. Qiu, Dawei & Xue, Juxing & Zhang, Tingqi & Wang, Jianhong & Sun, Mingyang, 2023. "Federated reinforcement learning for smart building joint peer-to-peer energy and carbon allowance trading," Applied Energy, Elsevier, vol. 333(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    4. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    5. Aslani, Mehrdad & Mashayekhi, Mehdi & Hashemi-Dezaki, Hamed & Ketabi, Abbas, 2022. "Robust optimal operation of energy hub incorporating integrated thermal and electrical demand response programs under various electric vehicle charging modes," Applied Energy, Elsevier, vol. 321(C).
    6. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    7. Gan, Wei & Yan, Mingyu & Yao, Wei & Wen, Jinyu, 2021. "Peer to peer transactive energy for multiple energy hub with the penetration of high-level renewable energy," Applied Energy, Elsevier, vol. 295(C).
    8. Yin, Linfei & Li, Yu, 2022. "Hybrid multi-agent emotional deep Q network for generation control of multi-area integrated energy systems," Applied Energy, Elsevier, vol. 324(C).
    9. Lasemi, Mohammad Ali & Arabkoohsar, Ahmad & Hajizadeh, Amin & Mohammadi-ivatloo, Behnam, 2022. "A comprehensive review on optimization challenges of smart energy hubs under uncertainty factors," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    10. Ahmadisedigh, Hossein & Gosselin, Louis, 2022. "How can combined heating and cooling networks benefit from thermal energy storage? Minimizing lifetime cost for different scenarios," Energy, Elsevier, vol. 243(C).
    11. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    12. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    13. Karimi, Hamid & Jadid, Shahram, 2023. "Multi-layer energy management of smart integrated-energy microgrid systems considering generation and demand-side flexibility," Applied Energy, Elsevier, vol. 339(C).
    14. Azimi, Maryam & Salami, Abolfazl, 2021. "A new approach on quantification of flexibility index in multi-carrier energy systems towards optimally energy hub management," Energy, Elsevier, vol. 232(C).
    15. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    16. Lu, Xinhui & Li, Haobin & Zhou, Kaile & Yang, Shanlin, 2023. "Optimal load dispatch of energy hub considering uncertainties of renewable energy and demand response," Energy, Elsevier, vol. 262(PB).
    17. Pazouki, Samaneh & Naderi, Ehsan & Asrari, Arash, 2021. "A remedial action framework against cyberattacks targeting energy hubs integrated with distributed energy resources," Applied Energy, Elsevier, vol. 304(C).
    18. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    19. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    20. Fan, Guangyao & Liu, Zhijian & Liu, Xuan & Shi, Yaxin & Wu, Di & Guo, Jiacheng & Zhang, Shicong & Yang, Xinyan & Zhang, Yulong, 2022. "Two-layer collaborative optimization for a renewable energy system combining electricity storage, hydrogen storage, and heat storage," Energy, Elsevier, vol. 259(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:311:y:2022:i:c:s0306261922001064. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.