
Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy

Author

Listed:
  • Shen, Rendong
  • Zhong, Shengyuan
  • Wen, Xin
  • An, Qingsong
  • Zheng, Ruifan
  • Li, Yang
  • Zhao, Jun

Abstract

Against the background of high global building energy consumption, meeting the ever-growing energy demand of building energy systems (BES) with renewable energy is an effective way to promote the clean transformation of the global energy structure and achieve “carbon neutrality”. However, the introduction of renewable energy makes BES control more complicated: the fluctuation of renewable generation and the randomness of loads cause a mismatch between the supply and demand sides that limits further growth in renewable energy consumption. Developing an efficient framework for the cooperative control of the various controlled objects on the supply and demand sides is therefore challenging. To address this challenge, a multi-agent deep reinforcement learning framework is proposed to optimize the energy management of the building. In this paper, a dueling double deep Q-network is used to optimize each individual agent, and a value-decomposition network is introduced to solve the cooperative optimization of multiple agents. In addition, considering the control characteristics of BES, prioritized experience replay and a feasible-action screening mechanism are introduced to accelerate convergence and keep the algorithm stable when applied to BES. Simulation results show that the multi-agent cooperation algorithm can control a variety of different devices simultaneously and achieve multi-objective cooperative optimization of the BES. Moreover, compared with a conventional rule-based control approach, the proposed approach reduces the uncomfortable duration by 84%, the amount of unconsumed renewable energy by 43%, and the energy cost by 8%.
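
The abstract's key algorithmic ingredient, value decomposition across cooperating agents, can be sketched compactly. Below is a minimal illustration in Python/PyTorch, not the authors' code: the per-agent dueling Q-network and the value-decomposition network (VDN) joint value are standard forms of the techniques the abstract names, while every layer size, function name, and toy dimension is an assumption made for the example. Each agent evaluates its own actions, and the joint Q-value trained against the shared team reward is the sum of the per-agent values.

    # Minimal VDN sketch (illustrative only; not the paper's implementation).
    import torch
    import torch.nn as nn

    class DuelingQNet(nn.Module):
        """Per-agent dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
        def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
            self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            h = self.body(obs)
            v, a = self.value(h), self.advantage(h)
            return v + a - a.mean(dim=-1, keepdim=True)

    def vdn_joint_q(nets, observations, actions):
        """Joint Q-value is the sum of each agent's Q for its chosen action (VDN)."""
        per_agent_q = [
            net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
            for net, obs, act in zip(nets, observations, actions)
        ]
        return torch.stack(per_agent_q, dim=0).sum(dim=0)

    # Toy usage: two agents (say, one supply-side and one demand-side device),
    # a batch of 4 transitions, one shared team reward training both networks.
    nets = [DuelingQNet(obs_dim=8, n_actions=5) for _ in range(2)]
    obs = [torch.randn(4, 8) for _ in range(2)]
    acts = [torch.randint(0, 5, (4,)) for _ in range(2)]
    team_reward = torch.randn(4)
    loss = nn.functional.mse_loss(vdn_joint_q(nets, obs, acts), team_reward)
    loss.backward()  # one scalar team signal updates every agent's network

Because the joint value is a simple sum, each trained agent can act greedily on its own Q-network at execution time, which is what allows such a framework to control several different devices simultaneously under one cooperative objective.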

Suggested Citation

  • Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
  • Handle: RePEc:eee:appene:v:312:y:2022:i:c:s0306261922001829
    DOI: 10.1016/j.apenergy.2022.118724

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922001829
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.118724?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    2. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    3. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun & Li, Fusheng & Lin, Dan & Zhu, Hanxin, 2021. "Efficient experience replay based deep deterministic policy gradient for AGC dispatch in integrated energy system," Applied Energy, Elsevier, vol. 285(C).
    4. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    5. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    6. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    7. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    8. Zhong, Shengyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Deng, Shuai & Li, Yang & Hussain, Sajjad & Wang, Xiaoyuan & Zhu, Jiebei, 2021. "Quantitative analysis of information interaction in building energy systems based on mutual information," Energy, Elsevier, vol. 214(C).
    9. Yang, Lei & Nagy, Zoltan & Goffin, Philippe & Schlueter, Arno, 2015. "Reinforcement learning for optimal control of low exergy buildings," Applied Energy, Elsevier, vol. 156(C), pages 577-586.
    10. García Kerdan, Iván & Morillón Gálvez, David, 2020. "Artificial neural network structure optimisation for accurately prediction of exergy, comfort and life cycle cost performance of a low energy building," Applied Energy, Elsevier, vol. 280(C).
    11. Jiang, C.X. & Jing, Z.X. & Cui, X.R. & Ji, T.Y. & Wu, Q.H., 2018. "Multiple agents and reinforcement learning for modelling charging loads of electric taxis," Applied Energy, Elsevier, vol. 222(C), pages 158-168.
    12. Buonomano, A. & Calise, F. & Cappiello, F.L. & Palombo, A. & Vicidomini, M., 2019. "Dynamic analysis of the integration of electric vehicles in efficient buildings fed by renewables," Applied Energy, Elsevier, vol. 245(C), pages 31-50.
    13. Gasser, Jan & Cai, Hanmin & Karagiannopoulos, Stavros & Heer, Philipp & Hug, Gabriela, 2021. "Predictive energy management of residential buildings while self-reporting flexibility envelope," Applied Energy, Elsevier, vol. 288(C).
    14. Wu, Jingda & He, Hongwen & Peng, Jiankun & Li, Yuecheng & Li, Zhanjiang, 2018. "Continuous reinforcement learning of energy management with deep Q network for a power split hybrid electric bus," Applied Energy, Elsevier, vol. 222(C), pages 799-811.
    15. Wu, Wenbo & Dong, Bing & Wang, Qi (Ryan) & Kong, Meng & Yan, Da & An, Jingjing & Liu, Yapan, 2020. "A novel mobility-based approach to derive urban-scale building occupant profiles and analyze impacts on building energy consumption," Applied Energy, Elsevier, vol. 278(C).
    16. Yin, Linfei & Wu, Yunzhi, 2022. "Mode-decomposition memory reinforcement network strategy for smart generation control in multi-area power systems containing renewable energy," Applied Energy, Elsevier, vol. 307(C).
    17. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    2. Zhou, Yuekuan, 2023. "A dynamic self-learning grid-responsive strategy for battery sharing economy—multi-objective optimisation and posteriori multi-criteria decision making," Energy, Elsevier, vol. 266(C).
    3. Jiang, Yuliang & Zhu, Shanying & Xu, Qimin & Yang, Bo & Guan, Xinping, 2023. "Hybrid modeling-based temperature and humidity adaptive control for a multi-zone HVAC system," Applied Energy, Elsevier, vol. 334(C).
    4. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    5. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    6. Duan, Pengfei & Zhao, Bingxu & Zhang, Xinghui & Fen, Mengdan, 2023. "A day-ahead optimal operation strategy for integrated energy systems in multi-public buildings based on cooperative game," Energy, Elsevier, vol. 275(C).
    7. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    8. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    4. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    5. Seppo Sierla & Heikki Ihasalo & Valeriy Vyatkin, 2022. "A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems," Energies, MDPI, vol. 15(10), pages 1-25, May.
    6. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    7. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    8. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    9. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    10. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    11. Ramya Kuppusamy & Srete Nikolovski & Yuvaraja Teekaraman, 2023. "Review of Machine Learning Techniques for Power Quality Performance Evaluation in Grid-Connected Systems," Sustainability, MDPI, vol. 15(20), pages 1-29, October.
    12. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    13. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    14. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    15. Sabarathinam Srinivasan & Suresh Kumarasamy & Zacharias E. Andreadakis & Pedro G. Lind, 2023. "Artificial Intelligence and Mathematical Models of Power Grids Driven by Renewable Energy Sources: A Survey," Energies, MDPI, vol. 16(14), pages 1-56, July.
    16. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    17. Zhong, Shengyuan & Wang, Xiaoyuan & Zhao, Jun & Li, Wenjia & Li, Hao & Wang, Yongzhen & Deng, Shuai & Zhu, Jiebei, 2021. "Deep reinforcement learning framework for dynamic pricing demand response of regenerative electric heating," Applied Energy, Elsevier, vol. 288(C).
    18. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    19. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    20. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:312:y:2022:i:c:s0306261922001829. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.