Printed from https://ideas.repec.org/a/gam/jeners/v15y2022i10p3526-d813404.html

A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems

Author

Listed:
  • Seppo Sierla

    (Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland)

  • Heikki Ihasalo

    (Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland)

  • Valeriy Vyatkin

    (Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland
    Department of Computer Science, Electrical and Space Engineering, Lulea University of Technology, 97187 Lulea, Sweden
    International Research Laboratory of Computer Technologies, ITMO University, 197101 St. Petersburg, Russia)

Abstract

Reinforcement learning has emerged as a potentially disruptive technology for the control and optimization of HVAC systems. A reinforcement learning agent takes actions, which can be direct HVAC actuator commands or setpoints for control loops in building automation systems. The actions are taken to optimize one or more targets, such as indoor air quality, energy consumption and energy cost. The agent receives feedback from the HVAC systems to quantify how well these targets have been achieved; this feedback is captured by a reward function designed by the developer of the reinforcement learning agent. A few reviews have focused on the reward aspect of reinforcement learning applications for HVAC. However, there is a lack of reviews that assess how the actions of the reinforcement learning agent have been formulated, and how this formulation affects the ability to achieve various optimization targets in single-zone or multi-zone buildings. The aim of this review is to identify the action formulations in the literature and to assess how the choice of formulation affects the level of abstraction at which the HVAC systems are considered. Our methodology involves a search string in the Web of Science database and a list of selection criteria applied to each article in the search results. Each selected article has then been categorized in three tiers. Firstly, the applicability of the approach to buildings with one or more zones is considered. Secondly, the articles are categorized by the type of action taken by the agent, such as a binary, discrete or continuous action. Thirdly, the articles are categorized by the aspects of the indoor environment being controlled, namely temperature, humidity or air quality. The main result of the review is this three-tier categorization, which reveals the community’s emphasis on specific HVAC applications as well as the readiness to interface reinforcement learning solutions with HVAC systems. The article concludes with a discussion of trends in the field as well as challenges that require further research.
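The action and reward concepts summarized above can be sketched with a toy example. The following Python snippet is not taken from the reviewed article: the single-zone thermal model, all numeric parameters and the reward weighting are hypothetical illustrations. It trains a tabular Q-learning agent that chooses a discrete heating power level, one of the action formulations the review categorizes, and is rewarded for keeping the zone near a temperature setpoint while being penalized for energy use:

```python
import random

# Hypothetical single-zone example (illustrative only, not from the article):
# a tabular Q-learning agent picks a discrete heating power level - an example
# of the "discrete action" formulation - and its reward trades off comfort
# against energy consumption.

ACTIONS = [0.0, 0.5, 1.0]  # heating power as a fraction of maximum

def step(temp, action, outdoor=5.0, alpha=0.1, gain=3.0):
    """First-order zone model: drift toward outdoor temperature plus heating."""
    return temp + alpha * (outdoor - temp) + gain * action

def reward(temp, action, setpoint=21.0, energy_weight=0.1):
    """Penalize distance from the comfort setpoint and energy use."""
    return -abs(temp - setpoint) - energy_weight * action

def train(episodes=2000, horizon=48, epsilon=0.1, lr=0.2, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {}  # maps (rounded temperature, action index) -> estimated value
    for _ in range(episodes):
        temp = rng.uniform(15.0, 25.0)
        for _ in range(horizon):
            s = round(temp)
            if rng.random() < epsilon:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            temp = step(temp, ACTIONS[a])
            r = reward(temp, ACTIONS[a])
            s2 = round(temp)
            best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + lr * (r + gamma * best_next - old)
    return q

q = train()

def greedy(s):
    """Learned policy: best action index for a (rounded) zone temperature."""
    return max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
```

In the reviewed literature, the toy model above is replaced by a building simulator or the actual building automation system, the discrete action may instead be a binary command or a continuous setpoint, and the scalar reward is often a multi-objective formulation over comfort, air quality and energy cost.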

Suggested Citation

  • Seppo Sierla & Heikki Ihasalo & Valeriy Vyatkin, 2022. "A Review of Reinforcement Learning Applications to Control of Heating, Ventilation and Air Conditioning Systems," Energies, MDPI, vol. 15(10), pages 1-25, May.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:10:p:3526-:d:813404

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/10/3526/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/10/3526/
    Download Restriction: no

    References listed on IDEAS

    1. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    2. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    3. Yang, Lei & Nagy, Zoltan & Goffin, Philippe & Schlueter, Arno, 2015. "Reinforcement learning for optimal control of low exergy buildings," Applied Energy, Elsevier, vol. 156(C), pages 577-586.
    4. Alberto Carotenuto & Francesca Ceglia & Elisa Marrasso & Maurizio Sasso & Laura Vanoli, 2021. "Exergoeconomic Optimization of Polymeric Heat Exchangers for Geothermal Direct Applications," Energies, MDPI, vol. 14(21), pages 1-20, October.
    5. Lork, Clement & Li, Wen-Tai & Qin, Yan & Zhou, Yuren & Yuen, Chau & Tushar, Wayes & Saha, Tapan K., 2020. "An uncertainty-aware deep reinforcement learning framework for residential air conditioning energy management," Applied Energy, Elsevier, vol. 276(C).
    6. Francesca Ceglia & Adriano Macaluso & Elisa Marrasso & Carlo Roselli & Laura Vanoli, 2020. "Energy, Environmental, and Economic Analyses of Geothermal Polygeneration System Using Dynamic Simulations," Energies, MDPI, vol. 13(18), pages 1-34, September.
    7. Ma, Nan & Aviv, Dorit & Guo, Hongshan & Braham, William W., 2021. "Measuring the right factors: A review of variables and models for thermal comfort and indoor air quality," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
    8. Wang, Xuan & Wang, Rui & Jin, Ming & Shu, Gequn & Tian, Hua & Pan, Jiaying, 2020. "Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    9. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    10. Haji Hosseinloo, Ashkan & Ryzhov, Alexander & Bischi, Aldo & Ouerdane, Henni & Turitsyn, Konstantin & Dahleh, Munther A., 2020. "Data-driven control of micro-climate in buildings: An event-triggered reinforcement learning approach," Applied Energy, Elsevier, vol. 277(C).
    11. Lee, Zachary E. & Zhang, K. Max, 2021. "Generalized reinforcement learning for building control using Behavioral Cloning," Applied Energy, Elsevier, vol. 304(C).
    12. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    13. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    14. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    15. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    16. Ting Hu & Zhikun Ding, 2021. "An Integrated Prediction Model for Building Energy Consumption: A Case Study," Springer Books, in: Gui Ye & Hongping Yuan & Jian Zuo (ed.), Proceedings of the 24th International Symposium on Advancement of Construction Management and Real Estate, pages 1655-1665, Springer.
    17. Dong, Bing & Liu, Yapan & Fontenot, Hannah & Ouf, Mohamed & Osman, Mohamed & Chong, Adrian & Qin, Shuxu & Salim, Flora & Xue, Hao & Yan, Da & Jin, Yuan & Han, Mengjie & Zhang, Xingxing & Azar, Elie & , 2021. "Occupant behavior modeling methods for resilient building design, operation and policy at urban scale: A review," Applied Energy, Elsevier, vol. 293(C).
    18. Ding, Zhikun & Chen, Weilin & Hu, Ting & Xu, Xiaoxiao, 2021. "Evolutionary double attention-based long short-term memory model for building energy prediction: Case study of a green building," Applied Energy, Elsevier, vol. 288(C).
    19. Ce Chi & Kaixuan Ji & Penglei Song & Avinab Marahatta & Shikui Zhang & Fa Zhang & Dehui Qiu & Zhiyong Liu, 2021. "Cooperatively Improving Data Center Energy Efficiency Based on Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-32, April.
    20. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    21. Davide Coraci & Silvio Brandi & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings," Energies, MDPI, vol. 14(4), pages 1-26, February.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    3. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    4. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    5. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    6. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    7. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    8. Song, Yuguang & Xia, Mingchao & Chen, Qifang & Chen, Fangjian, 2023. "A data-model fusion dispatch strategy for the building energy flexibility based on the digital twin," Applied Energy, Elsevier, vol. 332(C).
    9. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    10. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "An occupant-centric control framework for balancing comfort, energy use and hygiene in hot water systems: A model-free reinforcement learning approach," Applied Energy, Elsevier, vol. 312(C).
    11. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    12. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    13. Haji Hosseinloo, Ashkan & Ryzhov, Alexander & Bischi, Aldo & Ouerdane, Henni & Turitsyn, Konstantin & Dahleh, Munther A., 2020. "Data-driven control of micro-climate in buildings: An event-triggered reinforcement learning approach," Applied Energy, Elsevier, vol. 277(C).
    14. Silvestri, Alberto & Coraci, Davide & Brandi, Silvio & Capozzoli, Alfonso & Borkowski, Esther & Köhler, Johannes & Wu, Duan & Zeilinger, Melanie N. & Schlueter, Arno, 2024. "Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control," Applied Energy, Elsevier, vol. 368(C).
    15. Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.
    16. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    17. Charalampos Rafail Lazaridis & Iakovos Michailidis & Georgios Karatzinis & Panagiotis Michailidis & Elias Kosmatopoulos, 2024. "Evaluating Reinforcement Learning Algorithms in Residential Energy Saving and Comfort Management," Energies, MDPI, vol. 17(3), pages 1-33, January.
    18. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    19. Dalia Mohammed Talat Ebrahim Ali & Violeta Motuzienė & Rasa Džiugaitė-Tumėnienė, 2024. "AI-Driven Innovations in Building Energy Management Systems: A Review of Potential Applications and Energy Savings," Energies, MDPI, vol. 17(17), pages 1-35, August.
    20. Zhao, Liyuan & Yang, Ting & Li, Wei & Zomaya, Albert Y., 2022. "Deep reinforcement learning-based joint load scheduling for household multi-energy system," Applied Energy, Elsevier, vol. 324(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:10:p:3526-:d:813404. See general information about how to correct material in RePEc.




    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.