
Reinforcement learning for building controls: The opportunities and challenges

Author(s)

  • Wang, Zhe
  • Hong, Tianzhen

Abstract

Building controls are becoming more important, and more complicated, due to dynamic and stochastic energy demand, on-site intermittent energy supply, and energy storage, making buildings difficult to optimize with conventional control techniques. Reinforcement Learning (RL), an emerging control technique, has attracted growing research interest and demonstrated its potential to enhance building performance while addressing some limitations of other advanced control techniques, such as model predictive control. This study presents a comprehensive review of existing studies that applied RL to building controls. It provides a detailed breakdown of these studies by the specific variation used for each major component of reinforcement learning: algorithm, state, action, reward, and environment. We found that RL for building controls is still at the research stage, with limited application (11% of the reviewed studies) in real buildings. Three significant barriers prevent the adoption of RL controllers in actual building controls: (1) the training process is time consuming and data demanding; (2) control security and robustness need to be enhanced; and (3) the generalization capability of RL controllers needs to be improved, using approaches such as transfer learning. Future research may focus on developing RL controllers that can be deployed in real buildings, addressing current RL challenges such as accelerating training and enhancing control robustness, and developing an open-source testbed and dataset for performance benchmarking of RL controllers.
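To make the five-component decomposition concrete, the following is a minimal, hypothetical sketch of the loop the review describes: a tabular Q-learning agent (algorithm) observes a discretized indoor temperature (state), switches a heater (action), and scores each step by an energy-plus-comfort penalty (reward) against a toy single-zone model (environment). All dynamics, coefficients, and names here are illustrative assumptions for exposition, not the paper's method.

    import random

    # State: indoor temperature discretized into bins; action: heater off/on.
    N_BINS = 16
    ACTIONS = [0, 1]
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

    Q = [[0.0, 0.0] for _ in range(N_BINS)]  # tabular Q-function

    def temp_to_bin(t):
        """Map indoor temperature (degC) to a discrete state index."""
        return max(0, min(N_BINS - 1, int(t) - 12))

    def env_step(t, action):
        """Toy environment: heat gain when on, loss toward a 10 degC exterior."""
        t_next = t + (1.5 if action else 0.0) - 0.1 * (t - 10.0)
        energy_cost = 1.0 if action else 0.0
        comfort_penalty = abs(t_next - 21.0)             # deviation from 21 degC setpoint
        return t_next, -(energy_cost + comfort_penalty)  # reward trades energy vs. comfort

    random.seed(0)
    for episode in range(500):
        t = random.uniform(14.0, 26.0)       # random initial indoor temperature
        for _ in range(96):                  # 96 fifteen-minute control steps = one day
            s = temp_to_bin(t)
            if random.random() < EPSILON:    # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            t, r = env_step(t, a)
            s2 = temp_to_bin(t)
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])  # Q-learning update

The review's findings map directly onto this skeleton: training is slow and data demanding because the loop needs many episodes of real or simulated building operation, and transfer learning amounts to initializing the learned values from a related building rather than from zeros.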

Suggested Citation

  • Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
  • Handle: RePEc:eee:appene:v:269:y:2020:i:c:s0306261920305481
    DOI: 10.1016/j.apenergy.2020.115036

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261920305481
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2020.115036?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    2. Yang, Lei & Nagy, Zoltan & Goffin, Philippe & Schlueter, Arno, 2015. "Reinforcement learning for optimal control of low exergy buildings," Applied Energy, Elsevier, vol. 156(C), pages 577-586.
    3. Kazmi, H. & D’Oca, S. & Delmastro, C. & Lodeweyckx, S. & Corgnati, S.P., 2016. "Generalizable occupant-driven optimization model for domestic hot water production in NZEB," Applied Energy, Elsevier, vol. 175(C), pages 1-15.
    4. Blum, D.H. & Arendt, K. & Rivalin, L. & Piette, M.A. & Wetter, M. & Veje, C.T., 2019. "Practical factors of envelope model setup and their effects on the performance of model predictive control for building heating, ventilating, and air conditioning systems," Applied Energy, Elsevier, vol. 236(C), pages 410-425.
    5. Chen, Yujiao & Tong, Zheming & Wu, Wentao & Samuelson, Holly & Malkawi, Ali & Norford, Leslie, 2019. "Achieving natural ventilation potential in practice: Control schemes and levels of automation," Applied Energy, Elsevier, vol. 235(C), pages 1141-1152.
    6. Brida V. Mbuwir & Frederik Ruelens & Fred Spiessens & Geert Deconinck, 2017. "Battery Energy Management in a Microgrid Using Batch Reinforcement Learning," Energies, MDPI, vol. 10(11), pages 1-19, November.
    7. Zhang, Xiaoshun & Bao, Tao & Yu, Tao & Yang, Bo & Han, Chuanjia, 2017. "Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid," Energy, Elsevier, vol. 133(C), pages 348-365.
    8. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    9. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    10. Georgios D. Kontes & Georgios I. Giannakis & Víctor Sánchez & Pablo De Agustin-Camacho & Ander Romero-Amorrortu & Natalia Panagiotidou & Dimitrios V. Rovas & Simone Steiger & Christopher Mutschler & G, 2018. "Simulation-Based Evaluation and Optimization of Control Strategies in Buildings," Energies, MDPI, vol. 11(12), pages 1-23, December.
    11. Frederik Ruelens & Sandro Iacovella & Bert J. Claessens & Ronnie Belmans, 2015. "Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning," Energies, MDPI, vol. 8(8), pages 1-19, August.
    12. Zhen Zhang & Cheng Ma & Rong Zhu, 2018. "Thermal and Energy Management Based on Bimodal Airflow-Temperature Sensing and Reinforcement Learning," Energies, MDPI, vol. 11(10), pages 1-14, September.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    4. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Operational optimization for off-grid renewable building energy system using deep reinforcement learning," Applied Energy, Elsevier, vol. 325(C).
    5. Haji Hosseinloo, Ashkan & Ryzhov, Alexander & Bischi, Aldo & Ouerdane, Henni & Turitsyn, Konstantin & Dahleh, Munther A., 2020. "Data-driven control of micro-climate in buildings: An event-triggered reinforcement learning approach," Applied Energy, Elsevier, vol. 277(C).
    6. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    7. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    8. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    9. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    10. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    11. Kathirgamanathan, Anjukan & De Rosa, Mattia & Mangina, Eleni & Finn, Donal P., 2021. "Data-driven predictive control for unlocking building energy flexibility: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
12. Paiho, Satu & Kiljander, Jussi & Sarala, Roope & Siikavirta, Hanne & Kilkki, Olli & Bajpai, Arpit & Duchon, Markus & Pahl, Marc-Oliver & Wüstrich, Lars & Lübben, Christian & Kirdan, Erkin & Schindler, 2021. "Towards cross-commodity energy-sharing communities – A review of the market, regulatory, and technical situation," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    13. Seppo Sierla & Heikki Ihasalo & Valeriy Vyatkin, 2022. "A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems," Energies, MDPI, vol. 15(10), pages 1-25, May.
    14. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    15. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    16. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    17. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    18. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    19. Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    20. Omar al-Ani & Sanjoy Das & Hongyu Wu, 2023. "Imitation Learning with Deep Attentive Tabular Neural Networks for Environmental Prediction and Control in Smart Home," Energies, MDPI, vol. 16(13), pages 1-19, June.
