Printed from https://ideas.repec.org/a/eee/energy/v277y2023ics0360544223010216.html

Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning

Author

Listed:
  • Li, Yanxue
  • Wang, Zixuan
  • Xu, Wenya
  • Gao, Weijun
  • Xu, Yang
  • Xiao, Fu

Abstract

An efficient and flexible energy management strategy can play an important role in energy conservation in the building sector. Model-free reinforcement learning control of building energy systems generally requires an enormous amount of training data, and its low learning efficiency creates an obstacle to practical deployment. This work proposes a hybrid model-based reinforcement learning framework that optimizes the indoor thermal comfort and energy cost-saving performance of a ZEH (zero energy house) space heating system using relatively short-period monitored data. The reward function is designed around energy cost, PV self-consumption and thermal discomfort; the proposed agents interact with a reduced-order thermodynamic model and an uncertain environment, and learn optimal control policies through training. Simulation results demonstrate that the proposed agents converge efficiently, with D3QN showing superior convergence performance. To evaluate the proposed algorithms, the trained agents are tested on monitored data. With the learned policies, the self-learning agents balance the needs of thermal comfort, energy cost saving and increased on-site PV consumption relative to the baselines. Comparative analysis shows that D3QN achieved over 30% cost savings compared with measured results, and it outperforms the DQN and Double DQN agents in test scenarios, maintaining more stable indoor temperatures under various outdoor conditions.
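The abstract describes a reward that trades off energy cost, PV self-consumption and thermal discomfort. A minimal sketch of such a reward signal is given below; the comfort band, weights and function names are illustrative assumptions, not the paper's actual formulation.

```python
def step_reward(
    grid_import_kwh: float,      # electricity drawn from the grid this step
    price_per_kwh: float,        # time-of-use electricity price
    pv_generation_kwh: float,    # on-site PV output this step
    pv_consumed_kwh: float,      # PV output used on-site (not exported)
    indoor_temp: float,          # measured indoor temperature (deg C)
    comfort_band=(20.0, 24.0),   # acceptable temperature range (assumed)
    w_cost=1.0, w_pv=0.5, w_comfort=2.0,  # weights (assumed, to be tuned)
) -> float:
    """Hypothetical reward: penalize grid energy cost and thermal
    discomfort, reward the on-site PV self-consumption ratio."""
    energy_cost = grid_import_kwh * price_per_kwh
    pv_ratio = pv_consumed_kwh / pv_generation_kwh if pv_generation_kwh > 0 else 0.0
    low, high = comfort_band
    # Discomfort is the temperature deviation outside the comfort band.
    discomfort = max(0.0, low - indoor_temp) + max(0.0, indoor_temp - high)
    return -w_cost * energy_cost + w_pv * pv_ratio - w_comfort * discomfort
```

In a DQN-family agent such as D3QN, this scalar would be returned by the environment at each control step, so that maximizing expected return simultaneously lowers cost, raises self-consumption and keeps the room inside the comfort band.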

Suggested Citation

  • Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
  • Handle: RePEc:eee:energy:v:277:y:2023:i:c:s0360544223010216
    DOI: 10.1016/j.energy.2023.127627

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544223010216
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2023.127627?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Joanna Clarke & Justin Searle, 2021. "Active Building demonstrators for a low-carbon future," Nature Energy, Nature, vol. 6(12), pages 1087-1089, December.
    2. Bratislav Svetozarevic & Moritz Begle & Prageeth Jayathissa & Stefan Caranovic & Robert F. Shepherd & Zoltan Nagy & Illias Hischier & Johannes Hofer & Arno Schlueter, 2019. "Publisher Correction: Dynamic photovoltaic building envelopes for adaptive energy and comfort management," Nature Energy, Nature, vol. 4(8), pages 719-719, August.
    3. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    4. Niu, Jide & Tian, Zhe & Lu, Yakai & Zhao, Hongfang, 2019. "Flexible dispatch of a building energy system using building thermal storage and battery energy storage," Applied Energy, Elsevier, vol. 243(C), pages 274-287.
    5. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    6. Nan Zhou & Nina Khanna & Wei Feng & Jing Ke & Mark Levine, 2018. "Scenarios of energy efficiency and CO2 emissions reduction potential in the buildings sector in China to year 2050," Nature Energy, Nature, vol. 3(11), pages 978-984, November.
    7. Khorasany, Mohsen & Shokri Gazafroudi, Amin & Razzaghi, Reza & Morstyn, Thomas & Shafie-khah, Miadreza, 2022. "A framework for participation of prosumers in peer-to-peer energy trading and flexibility markets," Applied Energy, Elsevier, vol. 314(C).
    8. Saavedra, Aldo & Negrete-Pincetic, Matias & Rodríguez, Rafael & Salgado, Marcelo & Lorca, Álvaro, 2022. "Flexible load management using flexibility bands," Applied Energy, Elsevier, vol. 317(C).
    9. Totaro, Simone & Boukas, Ioannis & Jonsson, Anders & Cornélusse, Bertrand, 2021. "Lifelong control of off-grid microgrid with model-based reinforcement learning," Energy, Elsevier, vol. 232(C).
    10. Xue, Xue & Wang, Shengwei & Sun, Yongjun & Xiao, Fu, 2014. "An interactive building power demand management strategy for facilitating smart grid optimization," Applied Energy, Elsevier, vol. 116(C), pages 297-310.
    11. Hu, Maomao & Xiao, Fu & Jørgensen, John Bagterp & Wang, Shengwei, 2019. "Frequency control of air conditioners in response to real-time dynamic electricity prices in smart grids," Applied Energy, Elsevier, vol. 242(C), pages 92-106.
    12. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    13. Xiaoyi Zhang & Weijun Gao & Yanxue Li & Zixuan Wang & Yoshiaki Ushifusa & Yingjun Ruan, 2021. "Operational Performance and Load Flexibility Analysis of Japanese Zero Energy House," IJERPH, MDPI, vol. 18(13), pages 1-19, June.
    14. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    15. Zhang, Wei & Wang, Jixin & Xu, Zhenyu & Shen, Yuying & Gao, Guangzong, 2022. "A generalized energy management framework for hybrid construction vehicles via model-based reinforcement learning," Energy, Elsevier, vol. 260(C).
    16. Lee, Heeyun & Kim, Kyunghyun & Kim, Namwook & Cha, Suk Won, 2022. "Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning," Applied Energy, Elsevier, vol. 313(C).
    17. Zhang, Shuo & Hu, Xiaosong & Xie, Shaobo & Song, Ziyou & Hu, Lin & Hou, Cong, 2019. "Adaptively coordinated optimization of battery aging and energy management in plug-in hybrid electric buses," Applied Energy, Elsevier, vol. 256(C).
    18. Bratislav Svetozarevic & Moritz Begle & Prageeth Jayathissa & Stefan Caranovic & Robert F. Shepherd & Zoltan Nagy & Illias Hischier & Johannes Hofer & Arno Schlueter, 2019. "Dynamic photovoltaic building envelopes for adaptive energy and comfort management," Nature Energy, Nature, vol. 4(8), pages 671-682, August.
    19. Hiroshi OHTA, 2021. "Japan’s Policy on Net Carbon Neutrality by 2050," East Asian Policy (EAP), World Scientific Publishing Co. Pte. Ltd., vol. 13(01), pages 19-32, January.
    20. Li, Yanfei & O'Neill, Zheng & Zhang, Liang & Chen, Jianli & Im, Piljae & DeGraw, Jason, 2021. "Grey-box modeling and application for building energy simulations - A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 146(C).
    21. Ding, Yan & Lyu, Yacong & Lu, Shilei & Wang, Ran, 2022. "Load shifting potential assessment of building thermal storage performance for building design," Energy, Elsevier, vol. 243(C).
    22. Oliveira Panão, Marta J.N. & Mateus, Nuno M. & Carrilho da Graça, G., 2019. "Measured and modeled performance of internal mass as a thermal energy battery for energy flexible residential buildings," Applied Energy, Elsevier, vol. 239(C), pages 252-267.
    23. Afroz, Zakia & Shafiullah, GM & Urmee, Tania & Higgins, Gary, 2018. "Modeling techniques used in building HVAC control systems: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 83(C), pages 64-84.
    24. Nguyen, Hai-Tra & Safder, Usman & Loy-Benitez, Jorge & Yoo, ChangKyoo, 2022. "Optimal demand side management scheduling-based bidirectional regulation of energy distribution network for multi-residential demand response with self-produced renewable energy," Applied Energy, Elsevier, vol. 322(C).

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
    2. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    3. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    3. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    4. Song, Yuguang & Xia, Mingchao & Chen, Qifang & Chen, Fangjian, 2023. "A data-model fusion dispatch strategy for the building energy flexibility based on the digital twin," Applied Energy, Elsevier, vol. 332(C).
    5. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    6. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    7. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    8. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    9. Amir Ali Safaei Pirooz & Mohammad J. Sanjari & Young-Jin Kim & Stuart Moore & Richard Turner & Wayne W. Weaver & Dipti Srinivasan & Josep M. Guerrero & Mohammad Shahidehpour, 2023. "Adaptation of High Spatio-Temporal Resolution Weather/Load Forecast in Real-World Distributed Energy-System Operation," Energies, MDPI, vol. 16(8), pages 1-16, April.
    10. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    11. Vallianos, Charalampos & Candanedo, José & Athienitis, Andreas, 2023. "Application of a large smart thermostat dataset for model calibration and Model Predictive Control implementation in the residential sector," Energy, Elsevier, vol. 278(PA).
    12. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    13. Liang, Shen & Zheng, Hongfei & Wang, Xuanlin & Ma, Xinglong & Zhao, Zhiyong, 2022. "Design and performance validation on a solar louver with concentrating-photovoltaic-thermal modules," Renewable Energy, Elsevier, vol. 191(C), pages 71-83.
    14. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    15. Liu, Xiangfei & Ren, Mifeng & Yang, Zhile & Yan, Gaowei & Guo, Yuanjun & Cheng, Lan & Wu, Chengke, 2022. "A multi-step predictive deep reinforcement learning algorithm for HVAC control systems in smart buildings," Energy, Elsevier, vol. 259(C).
    16. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    17. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    18. Dongsu Kim & Jongman Lee & Sunglok Do & Pedro J. Mago & Kwang Ho Lee & Heejin Cho, 2022. "Energy Modeling and Model Predictive Control for HVAC in Buildings: A Review of Current Research Trends," Energies, MDPI, vol. 15(19), pages 1-30, October.
    19. Wang, Qiaochu & Ding, Yan & Kong, Xiangfei & Tian, Zhe & Xu, Linrui & He, Qing, 2022. "Load pattern recognition based optimization method for energy flexibility in office buildings," Energy, Elsevier, vol. 254(PC).
    20. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:277:y:2023:i:c:s0360544223010216. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.