IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v313y2022ics0306261922002975.html

Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings

Author

Listed:
  • Homod, Raad Z.
  • Togun, Hussein
  • Kadhim Hussein, Ahmed
  • Noraldeen Al-Mousawi, Fadhel
  • Yaseen, Zaher Mundher
  • Al-Kouz, Wael
  • Abd, Haider J.
  • Alawi, Omer A.
  • Goodarzi, Marjan
  • Hussein, Omar A.

Abstract

The energy demand of heating, ventilating and air conditioning (HVAC) systems can be reduced by manipulating indoor conditions within the comfort range, which both improves control performance and shifts peak load toward off-peak hours. Reinforcement learning (RL) is considered a promising technique for solving this problem without an analytical approach, but it has been unable to overcome the extremely large action spaces that arise in the real world, where converging to a set point is quite hard. The core of the problem is that the multi-agent state and action spaces for buildings and HVAC systems produce extremely large training data sets, which makes it difficult to fit the weight layers of a black-box model accurately. Despite past work on deep RL, fundamental issues remain unaddressed: the large action space and the large-scale nonlinearity caused by high thermal inertia. Hybrid deep clustering of multi-agent reinforcement learning (HDCMARL) can overcome these challenges because the hybrid deep clustering approach has a higher capacity for learning representations of large spaces and massive data. The RL agents are trained by a greedy iterative procedure and organized in a hybrid layered clustering structure so that they can handle a non-convex, non-linear and non-separable objective function. The parameters of the hybrid layer are optimized using the Quasi-Newton (QN) algorithm to obtain fast agent response signals. In short, the state and action spaces of multi-agent HVAC control in buildings are exploding; the proposed method overcomes this challenge and achieves 32% better performance in energy savings and 21% better performance in thermal comfort than PID control.
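The action-space-reduction idea behind the abstract can be illustrated with a toy sketch. This is not the authors' HDCMARL method: it is a minimal tabular Q-learning agent on an assumed one-zone thermal model, where the continuous setpoint range is "clustered" into a handful of representative discrete actions, the same motivation that drives the paper's clustering of a large action space. All constants and the model dynamics here are illustrative assumptions.

```python
import random

random.seed(0)

# Assumed toy one-zone thermal model: indoor temperature drifts toward
# the outdoor temperature and is pushed toward the chosen setpoint.
OUTDOOR = 32.0          # hot climate, degrees C (assumption)
COMFORT = (21.0, 24.0)  # comfort band, degrees C (assumption)

# Clustering the continuous setpoint range into a few representative
# actions is the action-space reduction being illustrated.
ACTIONS = [20.0, 21.5, 23.0, 24.5, 26.0]  # candidate setpoints

def step(temp, setpoint):
    """Advance the toy model one step; return (next_temp, reward)."""
    hvac_effort = abs(temp - setpoint)  # crude proxy for energy use
    next_temp = temp + 0.1 * (OUTDOOR - temp) + 0.5 * (setpoint - temp)
    comfy = COMFORT[0] <= next_temp <= COMFORT[1]
    # Reward comfort, penalize energy use.
    reward = (1.0 if comfy else -1.0) - 0.05 * hvac_effort
    return next_temp, reward

def bucket(temp):
    """Discretize temperature into 10 coarse state buckets (18-28 C)."""
    return max(0, min(9, int(temp - 18)))

# Tabular Q-learning over the clustered action set.
Q = [[0.0] * len(ACTIONS) for _ in range(10)]
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(500):
    temp = random.uniform(19.0, 28.0)
    for _ in range(50):
        s = bucket(temp)
        # Epsilon-greedy action selection.
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        temp, r = step(temp, ACTIONS[a])
        s2 = bucket(temp)
        # Standard Q-learning update.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])

# Greedy setpoint chosen for a hot starting state after training.
best = max(range(len(ACTIONS)), key=lambda i: Q[bucket(27.0)][i])
print(ACTIONS[best])
```

The same sketch scales badly as zones and actuators multiply, which is exactly the combinatorial explosion the paper's hybrid clustering is designed to tame.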

Suggested Citation

  • Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
  • Handle: RePEc:eee:appene:v:313:y:2022:i:c:s0306261922002975
    DOI: 10.1016/j.apenergy.2022.118863

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922002975
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.118863?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Maytham S. Ahmed & Azah Mohamed & Raad Z. Homod & Hussain Shareef, 2016. "Hybrid LSA-ANN Based Home Energy Management Scheduling Controller for Residential Demand Response Strategy," Energies, MDPI, vol. 9(9), pages 1-20, September.
    2. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    3. Homod, Raad Z. & Sahari, Khairul Salleh Mohamed & Almurib, Haider A.F., 2014. "Energy saving by integrated control of natural ventilation and HVAC systems using model guide for comparison," Renewable Energy, Elsevier, vol. 71(C), pages 639-650.
    4. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    5. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    6. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    7. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    8. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    9. Homod, Raad Z., 2018. "Analysis and optimization of HVAC control systems based on energy and performance considerations for smart buildings," Renewable Energy, Elsevier, vol. 126(C), pages 49-64.
    10. Dorokhova, Marina & Martinson, Yann & Ballif, Christophe & Wyrsch, Nicolas, 2021. "Deep reinforcement learning control of electric vehicle charging in the presence of photovoltaic generation," Applied Energy, Elsevier, vol. 301(C).
    11. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    12. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    13. Homod, Raad Z. & Gaeid, Khalaf S. & Dawood, Suroor M. & Hatami, Alireza & Sahari, Khairul S., 2020. "Evaluation of energy-saving potential for optimal time response of HVAC control system in smart buildings," Applied Energy, Elsevier, vol. 271(C).
    14. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    15. Amjad Almusaed & Asaad Almssad & Raad Z. Homod & Ibrahim Yitmen, 2020. "Environmental Profile on Building Material Passports for Hot Climates," Sustainability, MDPI, vol. 12(9), pages 1-20, May.
    16. Ghosh, Soumya & Chakraborty, Tilottama & Saha, Satyabrata & Majumder, Mrinmoy & Pal, Manish, 2016. "Development of the location suitability index for wave energy production by ANN and MCDM techniques," Renewable and Sustainable Energy Reviews, Elsevier, vol. 59(C), pages 1017-1028.
    17. Homod, Raad Z., 2014. "Assessment regarding energy saving and decoupling for different AHU (air handling unit) and control strategies in the hot-humid climatic region of Iraq," Energy, Elsevier, vol. 74(C), pages 762-774.
    18. Ceusters, Glenn & Rodríguez, Román Cantú & García, Alberte Bouso & Franke, Rüdiger & Deconinck, Geert & Helsen, Lieve & Nowé, Ann & Messagie, Maarten & Camargo, Luis Ramirez, 2021. "Model-predictive control and reinforcement learning in multi-energy system case studies," Applied Energy, Elsevier, vol. 303(C).
    19. Li, Jiawen & Yu, Tao & Yang, Bo, 2021. "A data-driven output voltage control of solid oxide fuel cell using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    20. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    21. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    22. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    23. Lee, Sangyoon & Choi, Dae-Hyun, 2021. "Dynamic pricing and energy management for profit maximization in multiple smart electric vehicle charging stations: A privacy-preserving deep reinforcement learning approach," Applied Energy, Elsevier, vol. 304(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project, subscribe to its RSS feed for this item.


    Cited by:

    1. Lei, Yue & Zhan, Sicheng & Ono, Eikichi & Peng, Yuzhen & Zhang, Zhiang & Hasama, Takamasa & Chong, Adrian, 2022. "A practical deep reinforcement learning framework for multivariate occupant-centric control in buildings," Applied Energy, Elsevier, vol. 324(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Mohammad Mahdi Forootan & Iman Larki & Rahim Zahedi & Abolfazl Ahmadi, 2022. "Machine Learning and Deep Learning in Energy Systems: A Review," Sustainability, MDPI, vol. 14(8), pages 1-49, April.
    4. Gao, Fang & Hu, Rongzhao & Yin, Linfei, 2023. "Variable boundary reinforcement learning for maximum power point tracking of photovoltaic grid-connected systems," Energy, Elsevier, vol. 264(C).
    5. Homod, Raad Z. & Togun, Hussein & Ateeq, Adnan A. & Al-Mousawi, Fadhel Noraldeen & Yaseen, Zaher Mundher & Al-Kouz, Wael & Hussein, Ahmed Kadhim & Alawi, Omer A. & Goodarzi, Marjan & Ahmadi, Goodarz, 2022. "An innovative clustering technique to generate hybrid modeling of cooling coils for energy analysis: A case study for control performance in HVAC systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 166(C).
    6. Yiting Kang & Jianlin Wu & Shilei Lu & Yashuai Yang & Zhen Yu & Haizhu Zhou & Shangqun Xie & Zheng Fu & Minchao Fan & Xiaolong Xu, 2022. "Comprehensive Carbon Emission and Economic Analysis on Nearly Zero-Energy Buildings in Different Regions of China," Sustainability, MDPI, vol. 14(16), pages 1-23, August.
    7. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
3. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    4. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    5. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    6. Homod, Raad Z. & Gaeid, Khalaf S. & Dawood, Suroor M. & Hatami, Alireza & Sahari, Khairul S., 2020. "Evaluation of energy-saving potential for optimal time response of HVAC control system in smart buildings," Applied Energy, Elsevier, vol. 271(C).
    7. Homod, Raad Z. & Togun, Hussein & Ateeq, Adnan A. & Al-Mousawi, Fadhel Noraldeen & Yaseen, Zaher Mundher & Al-Kouz, Wael & Hussein, Ahmed Kadhim & Alawi, Omer A. & Goodarzi, Marjan & Ahmadi, Goodarz, 2022. "An innovative clustering technique to generate hybrid modeling of cooling coils for energy analysis: A case study for control performance in HVAC systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 166(C).
    8. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    9. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    10. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    11. Gao, Yuan & Matsunami, Yuki & Miyata, Shohei & Akashi, Yasunori, 2022. "Multi-agent reinforcement learning dealing with hybrid action spaces: A case study for off-grid oriented renewable building energy system," Applied Energy, Elsevier, vol. 326(C).
    12. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    13. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
    14. Barja-Martinez, Sara & Aragüés-Peñalba, Mònica & Munné-Collado, Íngrid & Lloret-Gallego, Pau & Bullich-Massagué, Eduard & Villafafila-Robles, Roberto, 2021. "Artificial intelligence techniques for enabling Big Data services in distribution networks: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 150(C).
    15. Homod, Raad Z., 2018. "Analysis and optimization of HVAC control systems based on energy and performance considerations for smart buildings," Renewable Energy, Elsevier, vol. 126(C), pages 49-64.
    16. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    17. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    18. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    19. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    20. Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:313:y:2022:i:c:s0306261922002975. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.