Author
Listed:
- Xiangrui Wang
(College of Civil Engineering, Hunan University, Changsha 410082, China)
- Qilin Zhang
(College of Civil Engineering, Hunan University, Changsha 410082, China)
- Zhihua Chen
(Department of Building Science, School of Architecture, Tsinghua University, Beijing 100084, China)
- Jingjing Yang
(College of Civil Engineering, Hunan University, Changsha 410082, China)
- Yixing Chen
(College of Civil Engineering, Hunan University, Changsha 410082, China
Key Laboratory of Building Safety and Energy Efficiency of Ministry of Education, Hunan University, Changsha 410082, China)
Abstract
In the face of the global energy crisis, the demands of the energy transition and sustainable development have underscored the importance of optimal control in building energy management systems. Reinforcement learning (RL) has shown notable energy-saving potential in the optimal control of heating, ventilation, and air-conditioning (HVAC) systems. However, the tight coupling between algorithms and simulation environments limits cross-scenario application. This paper develops chiller plant models as OpenAI Gym environments to evaluate different RL algorithms for optimizing condenser water loop control. A shopping mall in Changsha, China, was selected as the case-study building. First, an energy simulation model in EnergyPlus was generated using AutoBPS. Then, the OpenAI Gym chiller plant system model was developed and validated against the EnergyPlus simulation results. Next, two RL algorithms, Deep Q-Network (DQN) and Double Deep Q-Network (DDQN), were deployed to control the condenser water flow rate and the approach temperature of the cooling towers in the RL environment. Finally, the optimization performance of DQN across three climate zones was evaluated using the AutoBPS-Gym toolkit. The findings indicate that, during the cooling season in the Changsha shopping mall, DQN control yielded energy savings of 14.16% for the condenser water system, whereas DDQN achieved savings of 14.01%. When the average control values from DQN were fed back into the EnergyPlus simulation, the recorded energy-saving rate was 10.42% relative to the baseline. Furthermore, applying the DQN algorithm across three different climate zones led to average energy savings of 4.0%, highlighting the toolkit’s ability to leverage RL for optimal control in various environmental contexts.
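To illustrate the kind of interface the paper describes, the sketch below shows a minimal condenser-water-loop environment following the OpenAI Gym `reset()`/`step()` convention, with a discrete action grid over condenser water flow ratio and cooling-tower approach temperature. The plant physics (pump, fan, and chiller power terms) are illustrative placeholders assumed for this sketch, not the paper's validated AutoBPS-Gym model.

```python
import random

class CondenserLoopEnv:
    """Toy chiller-plant condenser water loop with a Gym-style interface.
    Actions pick a (flow ratio, approach temperature) pair; the reward is
    the negative total plant power, so the agent learns to minimize energy."""

    # Discrete action grid: condenser water flow ratio x tower approach (deg C)
    FLOW_RATIOS = [0.6, 0.8, 1.0]
    APPROACHES = [2.0, 4.0, 6.0]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.action_space_n = len(self.FLOW_RATIOS) * len(self.APPROACHES)
        self.t = 0
        self.wet_bulb = 24.0

    def reset(self):
        self.t = 0
        self.wet_bulb = 24.0  # outdoor wet-bulb temperature, deg C (assumed)
        return self._obs()

    def _obs(self):
        return (self.t, self.wet_bulb)

    def step(self, action):
        flow = self.FLOW_RATIOS[action // len(self.APPROACHES)]
        approach = self.APPROACHES[action % len(self.APPROACHES)]
        # Placeholder energy model: pump power scales with flow cubed, tower
        # fan power falls as the approach widens, and chiller power rises as
        # the entering condenser water (wet bulb + approach) gets warmer.
        pump_kw = 30.0 * flow ** 3
        fan_kw = 40.0 / approach
        chiller_kw = 200.0 + 3.0 * (self.wet_bulb + approach - 26.0)
        reward = -(pump_kw + fan_kw + chiller_kw)
        self.t += 1
        self.wet_bulb += self.rng.uniform(-0.5, 0.5)  # simple weather drift
        done = self.t >= 24  # one-day episode at hourly steps
        return self._obs(), reward, done, {}
```

A DQN or DDQN agent would interact with this environment through the same loop as any Gym task: call `reset()`, then repeatedly choose an action index in `[0, action_space_n)` and call `step()` until `done` is returned.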
Suggested Citation
Xiangrui Wang & Qilin Zhang & Zhihua Chen & Jingjing Yang & Yixing Chen, 2025.
"Development of Chiller Plant Models in OpenAI Gym Environment for Evaluating Reinforcement Learning Algorithms,"
Energies, MDPI, vol. 18(9), pages 1-28, April.
Handle:
RePEc:gam:jeners:v:18:y:2025:i:9:p:2225-:d:1643977
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:18:y:2025:i:9:p:2225-:d:1643977. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.