Building demand response control through constrained reinforcement learning with linear policies
DOI: 10.1016/j.apenergy.2025.126404
References listed on IDEAS
- Kuldeep Kurte & Jeffrey Munk & Olivera Kotevska & Kadir Amasyali & Robert Smith & Evan McKee & Yan Du & Borui Cui & Teja Kuruganti & Helia Zandi, 2020. "Evaluating the Adaptability of Reinforcement Learning Based HVAC Control for Residential Houses," Sustainability, MDPI, vol. 12(18), pages 1-38, September.
- Richard D. Smallwood & Edward J. Sondik, 1973. "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon," Operations Research, INFORMS, vol. 21(5), pages 1071-1088, October.
- Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
- Panagiotis Michailidis & Iakovos Michailidis & Dimitrios Vamvakas & Elias Kosmatopoulos, 2023. "Model-Free HVAC Control in Buildings: A Review," Energies, MDPI, vol. 16(20), pages 1-45, October.
- Li, Yanjie & Yin, Baoqun & Xi, Hongsheng, 2011. "Finding optimal memoryless policies of POMDPs under the expected average reward criterion," European Journal of Operational Research, Elsevier, vol. 211(3), pages 556-567, June.
- Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
- Zeyue Sun & Mohsen Eskandari & Chaoran Zheng & Ming Li, 2022. "Handling Computation Hardness and Time Complexity Issue of Battery Energy Storage Scheduling in Microgrids by Deep Reinforcement Learning," Energies, MDPI, vol. 16(1), pages 1-20, December.
- Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
- M. Usman Saleem & Mustafa Shakir & M. Rehan Usman & M. Hamza Tahir Bajwa & Noman Shabbir & Payam Shams Ghahfarokhi & Kamran Daniel, 2023. "Integrating Smart Energy Management System with Internet of Things and Cloud Computing for Efficient Demand Side Management in Smart Grids," Energies, MDPI, vol. 16(12), pages 1-21, June.
- Chiel van Oosterom & Lisa M. Maillart & Jeffrey P. Kharoufeh, 2017. "Optimal maintenance policies for a safety‐critical system and its deteriorating sensor," Naval Research Logistics (NRL), John Wiley & Sons, vol. 64(5), pages 399-417, August.
- Malek Ebadi & Raha Akhavan-Tabatabaei, 2021. "Personalized Cotesting Policies for Cervical Cancer Screening: A POMDP Approach," Mathematics, MDPI, vol. 9(6), pages 1-20, March.
- N. Bora Keskin & John R. Birge, 2019. "Dynamic Selling Mechanisms for Product Differentiation and Learning," Operations Research, INFORMS, vol. 67(4), pages 1069-1089, July.
- Junbo Son & Yeongin Kim & Shiyu Zhou, 2022. "Alerting patients via health information system considering trust-dependent patient adherence," Information Technology and Management, Springer, vol. 23(4), pages 245-269, December.
- Jonghoon Ahn, 2020. "Improvement of the Performance Balance between Thermal Comfort and Energy Use for a Building Space in the Mid-Spring Season," Sustainability, MDPI, vol. 12(22), pages 1-14, November.
- Guo, Yuxiang & Qu, Shengli & Wang, Chuang & Xing, Ziwen & Duan, Kaiwen, 2024. "Optimal dynamic thermal management for data center via soft actor-critic algorithm with dynamic control interval and combined-value state space," Applied Energy, Elsevier, vol. 373(C).
- Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
- Wang, Qiaochu & Ding, Yan & Kong, Xiangfei & Tian, Zhe & Xu, Linrui & He, Qing, 2022. "Load pattern recognition based optimization method for energy flexibility in office buildings," Energy, Elsevier, vol. 254(PC).
- Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
- Martin Mundhenk, 2000. "The Complexity of Optimal Small Policies," Mathematics of Operations Research, INFORMS, vol. 25(1), pages 118-129, February.
- Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
- Zhao, Liyuan & Yang, Ting & Li, Wei & Zomaya, Albert Y., 2022. "Deep reinforcement learning-based joint load scheduling for household multi-energy system," Applied Energy, Elsevier, vol. 324(C).
- Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
Printed from https://ideas.repec.org/a/eee/appene/v398y2025ics0306261925011341.html