
Real-world deployment of model-free reinforcement learning for energy control in district heating systems: Enhancing flexibility across neighboring buildings

Authors

Listed:
  • Moshari, Amirhosein
  • Javanroodi, Kavan
  • Nik, Vahid M.

Abstract

Energy Management Systems (EMSs) often operate through inflexible, rule-based control. Model-free reinforcement learning (RL) has emerged as a promising alternative, providing adaptive and autonomous control without the need for detailed modeling. However, the complexity of operating energy systems and dynamic environmental conditions limit its practical use, with most studies relying on simulations or short-term trials that fall short of real-world deployment. This research presents a prolonged, multi-building implementation of a complete, autonomous, model-free RL system, developed and deployed within operational buildings throughout a full heating season. The system operated successfully without simulation, pre-training, calibration, or additional sensors, and, via flexibility signals, it enabled a more efficient and robust deployment while also addressing data-privacy concerns. The analysis yielded significant results, including a 7.9% decrease in heating energy compared with the previous, warmer year and a 29.7% reduction compared with a multi-year historical baseline. The strong performance of the implemented system was evident in a 3.85 °C reduction in return temperature and consistent reductions in peak demand, while occupant comfort was maintained. By combining ANCOVA normalization, matched-temperature baselines, thermodynamic efficiency metrics, and comfort analyses, this study introduces a structured framework and evaluation metrics to support robust field assessments of RL-based control in buildings. Finally, the study highlights the strong potential of autonomous RL engines for broader implementation in complex real-world energy management systems: they require no knowledge of system dynamics or infrastructure upgrades, offering a reliable and cost-effective solution.
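The core idea the abstract describes, model-free RL learning a heating control policy directly from interaction, without a system model or pre-training, can be illustrated with a minimal tabular Q-learning sketch. Everything below (the discretized states, the three supply-temperature actions, and the toy comfort/energy reward) is an illustrative assumption for exposition, not the authors' actual implementation.

```python
import random

random.seed(0)

# Illustrative assumptions: a coarse indoor-condition state and three
# supply-temperature adjustments; the real system's state/action spaces differ.
STATES = ["cold", "comfort", "warm"]
ACTIONS = ["decrease", "hold", "increase"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: reward comfort, penalize extra heating energy."""
    if state == "cold":
        nxt = "comfort" if action == "increase" else "cold"
    elif state == "warm":
        nxt = "comfort" if action == "decrease" else "warm"
    else:  # comfort drifts if the controller does not hold
        nxt = "comfort" if action == "hold" else random.choice(["cold", "warm"])
    reward = 1.0 if nxt == "comfort" else -1.0
    if action == "increase":
        reward -= 0.2  # energy cost of raising supply temperature
    return nxt, reward

def choose(state):
    # Epsilon-greedy action selection over the learned Q-values.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = "cold"
for _ in range(5000):
    action = choose(state)
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(greedy)  # learned policy: heat when cold, hold at comfort, cool when warm
```

The point of the sketch is that the controller needs no model of the building: the Q-table is updated only from observed transitions and rewards, which is what allows the deployed system to run without simulation, pre-training, or calibration.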

Suggested Citation

  • Moshari, Amirhosein & Javanroodi, Kavan & Nik, Vahid M., 2026. "Real-world deployment of model-free reinforcement learning for energy control in district heating systems: Enhancing flexibility across neighboring buildings," Applied Energy, Elsevier, vol. 402(PB).
  • Handle: RePEc:eee:appene:v:402:y:2026:i:pb:s0306261925017271
    DOI: 10.1016/j.apenergy.2025.126997

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925017271
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.126997?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can use your library subscription to access this item.
    ---><---

    As access to this document is restricted, you may want to look for a different version of it.

    References listed on IDEAS

    1. Kuldeep Kurte & Jeffrey Munk & Olivera Kotevska & Kadir Amasyali & Robert Smith & Evan McKee & Yan Du & Borui Cui & Teja Kuruganti & Helia Zandi, 2020. "Evaluating the Adaptability of Reinforcement Learning Based HVAC Control for Residential Houses," Sustainability, MDPI, vol. 12(18), pages 1-38, September.
    2. Elinor Ginzburg-Ganz & Itay Segev & Alexander Balabanov & Elior Segev & Sivan Kaully Naveh & Ram Machlev & Juri Belikov & Liran Katzir & Sarah Keren & Yoash Levron, 2024. "Reinforcement Learning Model-Based and Model-Free Paradigms for Optimal Control Problems in Power Systems: Comprehensive Review and Future Directions," Energies, MDPI, vol. 17(21), pages 1-54, October.
    3. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    4. Kim, Hyung Joon & Lee, Jae Yong & Tak, Hyunwoo & Kim, Dongwoo, 2025. "Deep reinforcement learning-based residential building energy management incorporating power-to-heat technology for building electrification," Energy, Elsevier, vol. 317(C).
    5. Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.
    6. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    7. Perera, A.T.D. & Javanroodi, Kavan & Nik, Vahid M., 2021. "Climate resilient interconnected infrastructure: Co-optimization of energy systems and urban morphology," Applied Energy, Elsevier, vol. 285(C).
    8. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    9. A. T. D. Perera & Kavan Javanroodi & Dasaraden Mauree & Vahid M. Nik & Pietro Florio & Tianzhen Hong & Deliang Chen, 2023. "Challenges resulting from urban density and climate change for the EU energy transition," Nature Energy, Nature, vol. 8(4), pages 397-412, April.
    10. H. Kazmi & Fahad Mehmood & S. Lodeweyckx & J. Driesen, 2018. "Gigawatt-Hour Scale Savings on a Budget of Zero: Deep Reinforcement Learning Based Optimal Control of Hot Water Systems," Post-Print hal-04317815, HAL.
    11. Halhoul Merabet, Ghezlane & Essaaidi, Mohamed & Ben Haddou, Mohamed & Qolomany, Basheer & Qadir, Junaid & Anan, Muhammad & Al-Fuqaha, Ala & Abid, Mohamed Riduan & Benhaddou, Driss, 2021. "Intelligent building control systems for thermal comfort and energy-efficiency: A systematic review of artificial intelligence-assisted techniques," Renewable and Sustainable Energy Reviews, Elsevier, vol. 144(C).
    12. Silvestri, Alberto & Coraci, Davide & Brandi, Silvio & Capozzoli, Alfonso & Borkowski, Esther & Köhler, Johannes & Wu, Duan & Zeilinger, Melanie N. & Schlueter, Arno, 2024. "Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control," Applied Energy, Elsevier, vol. 368(C).
    13. Xiaoyang Zhong & Mingming Hu & Sebastiaan Deetman & Bernhard Steubing & Hai Xiang Lin & Glenn Aguilar Hernandez & Carina Harpprecht & Chunbo Zhang & Arnold Tukker & Paul Behrens, 2021. "Global greenhouse gas emissions from residential and commercial building materials and mitigation strategies to 2060," Nature Communications, Nature, vol. 12(1), pages 1-10, December.
    14. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    15. Rik Heerden & Oreane Y. Edelenbosch & Vassilis Daioglou & Thomas Gallic & Luiz Bernardo Baptista & Alice Bella & Francesco Pietro Colelli & Johannes Emmerling & Panagiotis Fragkos & Robin Hasse & Joha, 2025. "Demand-side strategies enable rapid and deep cuts in buildings and transport emissions to 2050," Nature Energy, Nature, vol. 10(3), pages 380-394, March.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Razzano, Giuseppe & Brandi, Silvio & Piscitelli, Marco Savino & Capozzoli, Alfonso, 2025. "Rule extraction from deep reinforcement learning controller and comparative analysis with ASHRAE control sequences for the optimal management of Heating, Ventilation, and Air Conditioning (HVAC) systems in multizone buildings," Applied Energy, Elsevier, vol. 381(C).
    2. Kahil, Hussain & Sharma, Shiva & Välisuo, Petri & Elmusrati, Mohammed, 2025. "Reinforcement learning for data center energy efficiency optimization: A systematic literature review and research roadmap," Applied Energy, Elsevier, vol. 389(C).
    3. Liu, Pengxiang & Wu, Zhi & Zhang, Zijun & Gu, Wei & Sun, Qirun & Qiu, Haifeng, 2024. "Exploiting geospatial shifting flexibility of building energy use for urban multi-energy system operation," Energy, Elsevier, vol. 313(C).
    4. Mooyoung Yoo, 2024. "Development of Energy Efficient Domestic Hot Water Loop System Integrated with a Chilled Water Plant in Commercial Building," Sustainability, MDPI, vol. 17(1), pages 1-16, December.
    5. Zhou, Xinlei & Du, Han & Xue, Shan & Ma, Zhenjun, 2024. "Recent advances in data mining and machine learning for enhanced building energy management," Energy, Elsevier, vol. 307(C).
    6. Chen, Zhe & Xing, Tian & Wang, Yu & Zhuang, Yunlin & Zheng, Meng & Zhao, Qianchuan & Jia, Qing-Shan, 2025. "Coupling time-scale reinforcement learning methods for building operational optimization with waste heat," Applied Energy, Elsevier, vol. 391(C).
    7. Heidari, Amirreza & Girardin, Luc & Dorsaz, Cédric & Maréchal, François, 2025. "A trustworthy reinforcement learning framework for autonomous control of a large-scale complex heating system: Simulation and field implementation," Applied Energy, Elsevier, vol. 378(PA).
    8. Kaabinejadian, Amirreza & Pozarlik, Artur & Acar, Canan, 2025. "A systematic review of predictive, optimization, and smart control strategies for hydrogen-based building heating systems," Applied Energy, Elsevier, vol. 379(C).
    9. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    10. Liu, Shuli & Han, Junrui & Shen, Yongliang & Khan, Sheher Yar & Ji, Wenjie & Jin, Haibo & Kumar, Mahesh, 2025. "The contribution of artificial intelligence to phase change materials in thermal energy storage: From prediction to optimization," Renewable Energy, Elsevier, vol. 238(C).
    11. Razzaq, Asif & Sharif, Arshian & Ozturk, Ilhan & Skare, Marinko, 2022. "Inclusive infrastructure development, green innovation, and sustainable resource management: Evidence from China’s trade-adjusted material footprints," Resources Policy, Elsevier, vol. 79(C).
    12. Tsai, I-Chun, 2024. "A wise investment by urban governments: Evidence from intelligent sports facilities," Journal of Asian Economics, Elsevier, vol. 92(C).
    13. Ren, Yujie & Zhu, Hao & Fan, Tianhui, 2026. "Structure-sensitive carbon emission mechanisms in urban morphology: Grad-CAM guided nonlinear modeling and attention-based sectoral diagnosis," Applied Energy, Elsevier, vol. 402(PC).
    14. Zhou, Yizhou & Li, Xiang & Han, Haiteng & Wei, Zhinong & Zang, Haixiang & Sun, Guoqiang & Chen, Sheng, 2024. "Resilience-oriented planning of integrated electricity and heat systems: A stochastic distributionally robust optimization approach," Applied Energy, Elsevier, vol. 353(PA).
    15. Liu, Mingzhe & Guo, Mingyue & Fu, Yangyang & O’Neill, Zheng & Gao, Yuan, 2024. "Expert-guided imitation learning for energy management: Evaluating GAIL’s performance in building control applications," Applied Energy, Elsevier, vol. 372(C).
    16. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    17. Jin, Jiahui & Sun, Guoqiang & Chen, Sheng & Li, Yaping & Zhu, Hong & Mao, Wenbo & Ji, Wenlu, 2026. "Buildings-to-grid with generalized energy storage: A multi-agent decomposed deep reinforcement learning approach for delayed rewards," Applied Energy, Elsevier, vol. 404(C).
    18. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    19. Tirulo, Aschalew & Yadav, Monika & Lolamo, Mathewos & Chauhan, Siddhartha & Siano, Pierluigi & Shafie-khah, Miadreza, 2026. "Beyond automation: Unveiling the potential of agentic intelligence," Renewable and Sustainable Energy Reviews, Elsevier, vol. 226(PA).
    20. Dai, Mingkun & Li, Hangxin & Wang, Shengwei, 2023. "A reinforcement learning-enabled iterative learning control strategy of air-conditioning systems for building energy saving by shortening the morning start period," Applied Energy, Elsevier, vol. 334(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:402:y:2026:i:pb:s0306261925017271. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.