
Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types

Author

Listed:
  • Ayas Shaqour

    (Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Kasuga City 816-8580, Japan)

  • Aya Hagishima

    (Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Kasuga City 816-8580, Japan)

Abstract

Owing to their high energy demand, which accounted for 36% of the global share in 2020, buildings are one of the core targets of energy-efficiency research and regulation. Coupled with the increasing complexity of decentralized power grids and high renewable energy penetration, this makes the adoption of smart buildings increasingly urgent. Data-driven building energy management systems (BEMS) based on deep reinforcement learning (DRL) have attracted significant research interest, particularly in recent years, owing to their ability to overcome many of the challenges faced by conventional control methods in real-time building modelling, multi-objective optimization, and the generalization of BEMS for efficient wide deployment. A PRISMA-based systematic assessment of a database of 470 papers was conducted to review recent advances in DRL-based BEMS for different building types, their research directions, and knowledge gaps. Five building types were identified: residential, office, educational, data centre, and other commercial buildings. A comparative analysis was conducted based on the appliances and systems controlled by the BEMS, renewable energy integration, demand response (DR), and system objectives beyond energy, such as cost and comfort. Notably, only approximately 11% of the recent research considers real system implementations.
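
To illustrate the control pattern surveyed in the review, the sketch below shows a minimal reinforcement-learning loop for single-zone heating control: the agent observes the indoor temperature, chooses whether to run the heater, and receives a reward that trades off energy use against thermal comfort. The toy thermal model, discretization, and reward weights are illustrative assumptions, not taken from the paper, and a tabular Q-learning agent stands in for the deep RL agents (e.g. DQN, SAC, TD3) discussed in the reviewed literature.

import numpy as np

# --- Toy single-zone thermal model (illustrative assumption, not from the paper) ---
T_OUT = 5.0               # outdoor temperature [degC]
HEAT_GAIN = 1.5           # temperature rise per step when the heater is on [K]
LOSS_COEFF = 0.1          # fraction of indoor-outdoor difference lost per step
T_SET, BAND = 21.0, 1.0   # comfort setpoint and tolerance [degC]
ENERGY_PRICE = 0.5        # penalty per step of heater operation

def step(temp_in, heater_on):
    """Advance the zone temperature one step and return (next_temp, reward)."""
    next_temp = temp_in - LOSS_COEFF * (temp_in - T_OUT) + HEAT_GAIN * heater_on
    comfort_penalty = max(0.0, abs(next_temp - T_SET) - BAND)
    reward = -(ENERGY_PRICE * heater_on + comfort_penalty)
    return next_temp, reward

# Discretize temperature into states for a tabular agent
BINS = np.arange(10.0, 30.0, 0.5)
def to_state(temp):
    return int(np.clip(np.digitize(temp, BINS), 0, len(BINS) - 1))

# --- Tabular Q-learning (stand-in for the deep RL agents in the reviewed work) ---
rng = np.random.default_rng(0)
Q = np.zeros((len(BINS), 2))          # states x actions (heater off / on)
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration

for episode in range(500):
    temp = 15.0                        # start each episode in a cold zone
    s = to_state(temp)
    for t in range(96):                # 96 steps ~ one day at 15-min resolution
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        temp, r = step(temp, a)
        s_next = to_state(temp)
        # Standard Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The learned policy: heat when the zone is cold, idle once comfort is reached
print("Heater-on states:", [round(float(BINS[s]), 1) for s in range(len(BINS)) if np.argmax(Q[s]) == 1])

In the deep RL variants covered by the review, the Q-table would be replaced by a neural network, and the state would include richer signals such as occupancy, electricity price, weather forecasts, and on-site renewable generation.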

Suggested Citation

  • Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:22:p:8663-:d:976978

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/22/8663/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/22/8663/
    Download Restriction: no

    References listed on IDEAS

    1. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    2. Lei, Yue & Zhan, Sicheng & Ono, Eikichi & Peng, Yuzhen & Zhang, Zhiang & Hasama, Takamasa & Chong, Adrian, 2022. "A practical deep reinforcement learning framework for multivariate occupant-centric control in buildings," Applied Energy, Elsevier, vol. 324(C).
    3. Lork, Clement & Li, Wen-Tai & Qin, Yan & Zhou, Yuren & Yuen, Chau & Tushar, Wayes & Saha, Tapan K., 2020. "An uncertainty-aware deep reinforcement learning framework for residential air conditioning energy management," Applied Energy, Elsevier, vol. 276(C).
    4. Kuldeep Kurte & Jeffrey Munk & Olivera Kotevska & Kadir Amasyali & Robert Smith & Evan McKee & Yan Du & Borui Cui & Teja Kuruganti & Helia Zandi, 2020. "Evaluating the Adaptability of Reinforcement Learning Based HVAC Control for Residential Houses," Sustainability, MDPI, vol. 12(18), pages 1-38, September.
    5. Mahbod, Muhammad Haiqal Bin & Chng, Chin Boon & Lee, Poh Seng & Chui, Chee Kong, 2022. "Energy saving evaluation of an energy efficient data center using a model-free reinforcement learning approach," Applied Energy, Elsevier, vol. 322(C).
    6. Jacopo Torriti, 2017. "The Risk of Residential Peak Electricity Demand: A Comparison of Five European Countries," Energies, MDPI, vol. 10(3), pages 1-14, March.
    7. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    8. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    9. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "An occupant-centric control framework for balancing comfort, energy use and hygiene in hot water systems: A model-free reinforcement learning approach," Applied Energy, Elsevier, vol. 312(C).
    10. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    11. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    12. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    13. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    14. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    15. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    16. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    17. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "Reinforcement Learning for proactive operation of residential energy systems by learning stochastic occupant behavior and fluctuating solar energy: Balancing comfort, hygiene and energy use," Applied Energy, Elsevier, vol. 318(C).
    18. Arroyo, Javier & Manna, Carlo & Spiessens, Fred & Helsen, Lieve, 2022. "Reinforced model predictive control (RL-MPC) for building energy management," Applied Energy, Elsevier, vol. 309(C).
    19. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    20. Zhou, Xinlei & Lin, Wenye & Kumar, Ritunesh & Cui, Ping & Ma, Zhenjun, 2022. "A data-driven strategy using long short term memory models and reinforcement learning to predict building electricity consumption," Applied Energy, Elsevier, vol. 306(PB).
    21. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    22. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    23. Alessandro Liberati & Douglas G Altman & Jennifer Tetzlaff & Cynthia Mulrow & Peter C Gøtzsche & John P A Ioannidis & Mike Clarke & P J Devereaux & Jos Kleijnen & David Moher, 2009. "The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration," PLOS Medicine, Public Library of Science, vol. 6(7), pages 1-28, July.
    24. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    3. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    4. Seppo Sierla & Heikki Ihasalo & Valeriy Vyatkin, 2022. "A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems," Energies, MDPI, vol. 15(10), pages 1-25, May.
    5. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    6. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    7. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    8. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    9. Wenya Xu & Yanxue Li & Guanjie He & Yang Xu & Weijun Gao, 2023. "Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control," Energies, MDPI, vol. 16(13), pages 1-19, June.
    10. Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.
    11. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    12. Keerthana Sivamayil & Elakkiya Rajasekar & Belqasem Aljafari & Srete Nikolovski & Subramaniyaswamy Vairavasundaram & Indragandhi Vairavasundaram, 2023. "A Systematic Study on Reinforcement Learning Based Applications," Energies, MDPI, vol. 16(3), pages 1-23, February.
    13. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    14. Di Natale, L. & Svetozarevic, B. & Heer, P. & Jones, C.N., 2023. "Towards scalable physically consistent neural networks: An application to data-driven multi-zone thermal building models," Applied Energy, Elsevier, vol. 340(C).
    15. Zhou, Xinlei & Xue, Shan & Du, Han & Ma, Zhenjun, 2023. "Optimization of building demand flexibility using reinforcement learning and rule-based expert systems," Applied Energy, Elsevier, vol. 350(C).
    16. Fang, Xi & Gong, Guangcai & Li, Guannan & Chun, Liang & Peng, Pei & Li, Wenqiang & Shi, Xing, 2023. "Cross temporal-spatial transferability investigation of deep reinforcement learning control strategy in the building HVAC system level," Energy, Elsevier, vol. 263(PB).
    17. Song, Yuguang & Xia, Mingchao & Chen, Qifang & Chen, Fangjian, 2023. "A data-model fusion dispatch strategy for the building energy flexibility based on the digital twin," Applied Energy, Elsevier, vol. 332(C).
    18. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    19. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    20. Amir Ali Safaei Pirooz & Mohammad J. Sanjari & Young-Jin Kim & Stuart Moore & Richard Turner & Wayne W. Weaver & Dipti Srinivasan & Josep M. Guerrero & Mohammad Shahidehpour, 2023. "Adaptation of High Spatio-Temporal Resolution Weather/Load Forecast in Real-World Distributed Energy-System Operation," Energies, MDPI, vol. 16(8), pages 1-16, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:22:p:8663-:d:976978. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.