
Reinforced model predictive control (RL-MPC) for building energy management

Author

Listed:
  • Arroyo, Javier
  • Manna, Carlo
  • Spiessens, Fred
  • Helsen, Lieve

Abstract

Buildings need advanced control for the efficient and climate-neutral use of their energy systems. Model predictive control (MPC) and reinforcement learning (RL) arise as two powerful control techniques that have been extensively investigated in the literature for their application to building energy management. These methods show complementary qualities in terms of constraint satisfaction, computational demand, adaptability, and intelligibility, yet in practice a choice is usually made between the two. This paper compares both control approaches and proposes a novel algorithm called reinforced model predictive control (RL-MPC) that merges their relative merits. First, the complementarity between RL and MPC is emphasized at a conceptual level by commenting on the main aspects of each method. Second, the RL-MPC algorithm is described; it combines key features of each approach, namely state estimation, dynamic optimization, and learning. Finally, MPC, RL, and RL-MPC are implemented and evaluated in BOPTEST, a standardized simulation framework for the assessment of advanced control algorithms in buildings. The results indicate that pure RL cannot provide constraint satisfaction when using a control formulation equivalent to that of MPC and the same controller model for learning. The new RL-MPC algorithm can meet constraints and provide performance similar to MPC while enabling continuous learning and the possibility to deal with uncertain environments.
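The abstract describes RL-MPC as coupling an MPC-style dynamic optimization with a value function learned by RL. The sketch below illustrates that general idea only: a toy single-zone thermal model, a small discrete action set, and a learned quadratic cost-to-go used as the terminal cost of a short-horizon optimization. All model parameters, function names, and the grid-search "optimizer" are illustrative assumptions for this sketch; they are not the authors' implementation and do not use the BOPTEST interface.

```python
# Minimal RL-MPC-style loop (illustrative sketch, not the paper's algorithm).
# Assumed toy model: T[k+1] = A*T[k] + B*u[k] + C*T_out, with a quadratic
# comfort penalty and a learned value estimate as the MPC terminal cost.
import numpy as np
from itertools import product

A, B, C = 0.9, 0.5, 0.1        # assumed single-zone thermal dynamics
T_OUT = 5.0                    # outdoor temperature [degC]
T_SET = 21.0                   # comfort set point [degC]
ACTIONS = [0.0, 2.0, 4.0]      # discrete heating power levels [kW]
HORIZON = 3                    # MPC prediction horizon [steps]
PRICE = 0.2                    # energy price [EUR/kWh], assumed constant
GAMMA = 0.95                   # discount factor for the learned value term
ALPHA = 0.01                   # TD learning rate

def features(T):
    """Quadratic features of the zone temperature (illustrative choice)."""
    return np.array([1.0, T - T_SET, (T - T_SET) ** 2])

def value(w, T):
    """Learned cost-to-go estimate V(T) = w . phi(T)."""
    return w @ features(T)

def stage_cost(T, u):
    """Energy cost plus quadratic comfort penalty."""
    return PRICE * u + (T - T_SET) ** 2

def mpc_action(w, T0):
    """Enumerate short action sequences; the learned value is the terminal cost."""
    best_u, best_J = ACTIONS[0], np.inf
    for seq in product(ACTIONS, repeat=HORIZON):
        T, J = T0, 0.0
        for u in seq:
            J += stage_cost(T, u)
            T = A * T + B * u + C * T_OUT
        J += GAMMA ** HORIZON * value(w, T)   # RL value closes the horizon
        if J < best_J:
            best_J, best_u = J, seq[0]
    return best_u                             # receding horizon: apply first move

# Closed loop: apply the first action, observe the next state, TD-update the value.
w = np.zeros(3)
T = 20.0
rng = np.random.default_rng(0)
for k in range(50):
    u = mpc_action(w, T)
    cost = stage_cost(T, u)
    T_next = A * T + B * u + C * T_OUT + rng.normal(0.0, 0.1)  # noisy "plant"
    td_error = cost + GAMMA * value(w, T_next) - value(w, T)
    w += ALPHA * td_error * features(T)
    T = T_next
print("final zone temperature:", round(T, 2))
```

In the paper's formulation the learned value term enters the MPC optimization itself; here the exhaustive search over a tiny discrete action set merely stands in for a proper optimizer, and the TD update stands in for the RL learning component.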

Suggested Citation

  • Arroyo, Javier & Manna, Carlo & Spiessens, Fred & Helsen, Lieve, 2022. "Reinforced model predictive control (RL-MPC) for building energy management," Applied Energy, Elsevier, vol. 309(C).
  • Handle: RePEc:eee:appene:v:309:y:2022:i:c:s0306261921015932
    DOI: 10.1016/j.apenergy.2021.118346

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261921015932
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2021.118346?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can access this item through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Antimo Barbato & Cristiana Bolchini & Angela Geronazzo & Elisa Quintarelli & Andrei Palamarciuc & Alessandro Pitì & Cristina Rottondi & Giacomo Verticale, 2016. "Energy Optimization and Management of Demand Response Interactions in a Smart Campus," Energies, MDPI, vol. 9(6), pages 1-20, May.
    2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    3. Drgoňa, Ján & Picard, Damien & Kvasnica, Michal & Helsen, Lieve, 2018. "Approximate model predictive building control via machine learning," Applied Energy, Elsevier, vol. 218(C), pages 199-216.
    4. De Coninck, Roel & Helsen, Lieve, 2016. "Quantification of flexibility in buildings by cost curves – Methodology and application," Applied Energy, Elsevier, vol. 162(C), pages 653-665.
    5. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    2. Gao, Yuan & Miyata, Shohei & Akashi, Yasunori, 2023. "Energy saving and indoor temperature control for an office building using tube-based robust model predictive control," Applied Energy, Elsevier, vol. 341(C).
    3. Lei, Yue & Zhan, Sicheng & Ono, Eikichi & Peng, Yuzhen & Zhang, Zhiang & Hasama, Takamasa & Chong, Adrian, 2022. "A practical deep reinforcement learning framework for multivariate occupant-centric control in buildings," Applied Energy, Elsevier, vol. 324(C).
    4. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    5. Fu, Yangyang & Xu, Shichao & Zhu, Qi & O’Neill, Zheng & Adetola, Veronica, 2023. "How good are learning-based control v.s. model-based control for load shifting? Investigations on a single zone building energy system," Energy, Elsevier, vol. 273(C).
    6. Joanna Piotrowska-Woroniak & Krzysztof Cieśliński & Grzegorz Woroniak & Jonas Bielskus, 2022. "The Impact of Thermo-Modernization and Forecast Regulation on the Reduction of Thermal Energy Consumption and Reduction of Pollutant Emissions into the Atmosphere on the Example of Prefabricated Build," Energies, MDPI, vol. 15(8), pages 1-32, April.
    7. Zhou, Xinlei & Xue, Shan & Du, Han & Ma, Zhenjun, 2023. "Optimization of building demand flexibility using reinforcement learning and rule-based expert systems," Applied Energy, Elsevier, vol. 350(C).
    8. Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
    9. Amal Azzi & Mohamed Tabaa & Badr Chegari & Hanaa Hachimi, 2024. "Balancing Sustainability and Comfort: A Holistic Study of Building Control Strategies That Meet the Global Standards for Efficiency and Thermal Comfort," Sustainability, MDPI, vol. 16(5), pages 1-36, March.
    10. Panagiotis Michailidis & Iakovos Michailidis & Socratis Gkelios & Elias Kosmatopoulos, 2024. "Artificial Neural Network Applications for Energy Management in Buildings: Current Trends and Future Directions," Energies, MDPI, vol. 17(3), pages 1-47, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kathirgamanathan, Anjukan & De Rosa, Mattia & Mangina, Eleni & Finn, Donal P., 2021. "Data-driven predictive control for unlocking building energy flexibility: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
    2. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    5. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    6. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    7. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    8. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    9. Rae-Jun Park & Kyung-Bin Song & Bo-Sung Kwon, 2020. "Short-Term Load Forecasting Algorithm Using a Similar Day Selection Method Based on Reinforcement Learning," Energies, MDPI, vol. 13(10), pages 1-19, May.
    10. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    11. Christian Blad & Simon Bøgh & Carsten Kallesøe, 2021. "A Multi-Agent Reinforcement Learning Approach to Price and Comfort Optimization in HVAC-Systems," Energies, MDPI, vol. 14(22), pages 1-20, November.
    12. Esmaeili Aliabadi, Danial & Chan, Katrina, 2022. "The emerging threat of artificial intelligence on competition in liberalized electricity markets: A deep Q-network approach," Applied Energy, Elsevier, vol. 325(C).
    13. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    14. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    15. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    16. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    17. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    18. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    19. Nik, Vahid M. & Moazami, Amin, 2021. "Using collective intelligence to enhance demand flexibility and climate resilience in urban areas," Applied Energy, Elsevier, vol. 281(C).
    20. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:309:y:2022:i:c:s0306261921015932. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.