
CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation

Author

Listed:
  • Nik, Vahid M.
  • Hosseini, Mohammad

Abstract

A novel energy management (EM) approach, called CIRLEM, is introduced, integrating core elements of collective intelligence (CI) and reinforcement learning (RL). It operates by distributing a flexibility signal from the energy supplier to agents within the grid, prompting their responsive actions. The flexibility signal reflects the collective behaviour of the agents in the grid, and the agents learn and decide using a value-based, model-free RL engine. Two modes of running CIRLEM are defined, depending on whether all decision making happens only at the edge node (Edge Node Control, ENC) or jointly with the cluster (Edge node and Cluster Control, ECC). CIRLEM's performance is thoroughly investigated for an elderly-care building in Ålesund, Norway, specifically during extreme warm and cold seasons under future climate conditions. The building is divided into 20 thermal zones, each acting as an agent with three control strategies. CIRLEM undergoes comprehensive testing, evaluating policies with 24 and 48 sets of actions (referred to as L24 and L48) and six different randomness levels. The results demonstrate that CIRLEM swiftly converges to an optimal solution (the optimum set of policies), offering both enhanced indoor comfort and significant energy savings. Among the CIRLEM algorithms, ENC-L24, the fastest and simplest one, showed outstanding performance. Overall, CIRLEM delivers a remarkable improvement in energy flexibility and climate resilience for a group of grid-connected agents, ensuring energy savings without compromising indoor comfort.
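To make the control loop described in the abstract concrete, below is a minimal Python sketch of a cluster of value-based, model-free agents reacting to a shared flexibility signal. Only the agent count (20 thermal zones) and the three control strategies per agent come from the abstract; the state discretisation, reward shape, toy demand model, and all parameter values (EPSILON, ALPHA, agent_demand, reward) are illustrative assumptions, not the authors' implementation.

    # A minimal sketch of the scheme described in the abstract: zone agents
    # receive a collective "flexibility signal" and each picks one of three
    # control strategies with a value-based, model-free (Q-learning-style)
    # update. All names, reward shapes, and parameters are assumptions.
    import random

    N_ZONES = 20            # agents (thermal zones), as in the case study
    ACTIONS = [0, 1, 2]     # three control strategies per agent
    EPSILON = 0.1           # exploration ("randomness level"), assumed value
    ALPHA = 0.5             # learning rate, assumed value

    # One value table per agent, indexed by a discretised signal state.
    q_tables = [{s: [0.0] * len(ACTIONS) for s in ("low", "mid", "high")}
                for _ in range(N_ZONES)]

    def flexibility_signal(total_demand, capacity=100.0):
        """Discretise cluster demand into a signal broadcast to all agents."""
        ratio = total_demand / capacity
        return "low" if ratio < 0.5 else ("mid" if ratio < 0.8 else "high")

    def agent_demand(action):
        """Toy demand model: more comfort-oriented strategies use more energy."""
        return {0: 2.0, 1: 4.0, 2: 6.0}[action]

    def reward(action, signal):
        """Toy reward trading comfort against flexibility (assumed shape)."""
        comfort = action    # strategy 2 = most comfortable in this sketch
        penalty = {"low": 0.0, "mid": 1.0, "high": 2.0}[signal] * action
        return comfort - penalty

    total = sum(agent_demand(random.choice(ACTIONS)) for _ in range(N_ZONES))
    for step in range(1000):
        signal = flexibility_signal(total)
        total = 0.0
        for q in q_tables:
            # epsilon-greedy choice over the three strategies
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[signal][x])
            total += agent_demand(a)
            # value-based, model-free update toward the observed reward
            q[signal][a] += ALPHA * (reward(a, signal) - q[signal][a])

In ENC terms, all updates in this sketch happen locally per agent; an ECC-style variant would additionally coordinate value estimates at the cluster level, which the sketch omits.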

Suggested Citation

  • Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).
  • Handle: RePEc:eee:appene:v:350:y:2023:i:c:s0306261923011492
    DOI: 10.1016/j.apenergy.2023.121785

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261923011492
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2023.121785?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
    2. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    3. Zhuang, Dian & Gan, Vincent J.L. & Duygu Tekler, Zeynep & Chong, Adrian & Tian, Shuai & Shi, Xing, 2023. "Data-driven predictive control for smart HVAC system in IoT-integrated buildings with time-series forecasting and reinforcement learning," Applied Energy, Elsevier, vol. 338(C).
    4. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    5. Perera, A.T.D. & Javanroodi, Kavan & Nik, Vahid M., 2021. "Climate resilient interconnected infrastructure: Co-optimization of energy systems and urban morphology," Applied Energy, Elsevier, vol. 285(C).
    6. Lund, Peter D. & Lindgren, Juuso & Mikkola, Jani & Salpakari, Jyri, 2015. "Review of energy system flexibility measures to enable high levels of variable renewable electricity," Renewable and Sustainable Energy Reviews, Elsevier, vol. 45(C), pages 785-807.
    7. Lu, Renzhi & Hong, Seung Ho & Zhang, Xiongfeng, 2018. "A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach," Applied Energy, Elsevier, vol. 220(C), pages 220-230.
    8. Perera, A.T.D. & Wickramasinghe, P.U. & Nik, Vahid M. & Scartezzini, Jean-Louis, 2020. "Introducing reinforcement learning to the energy system design process," Applied Energy, Elsevier, vol. 262(C).
    9. Sergey V. Buldyrev & Roni Parshani & Gerald Paul & H. Eugene Stanley & Shlomo Havlin, 2010. "Catastrophic cascade of failures in interdependent networks," Nature, Nature, vol. 464(7291), pages 1025-1028, April.
    10. Coraci, Davide & Brandi, Silvio & Hong, Tianzhen & Capozzoli, Alfonso, 2023. "Online transfer learning strategy for enhancing the scalability and deployment of deep reinforcement learning control in smart buildings," Applied Energy, Elsevier, vol. 333(C).
    11. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    12. Fu, Yangyang & Xu, Shichao & Zhu, Qi & O’Neill, Zheng & Adetola, Veronica, 2023. "How good are learning-based control v.s. model-based control for load shifting? Investigations on a single zone building energy system," Energy, Elsevier, vol. 273(C).
    13. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    14. Takao Sasaki & Dora Biro, 2017. "Cumulative culture can emerge from collective intelligence in animal groups," Nature Communications, Nature, vol. 8(1), pages 1-6, April.
    15. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    16. A. K. Magnan & E.L.F. Schipper & M. Burkett & S. Bharwani & I. Burton & S. Eriksen & F. Gemenne & J. Schaar & G. Ziervogel, 2016. "Addressing the risk of maladaptation to climate change," Wiley Interdisciplinary Reviews: Climate Change, John Wiley & Sons, vol. 7(5), pages 646-665, September.
    17. Chun Wei & Xiaoqing Bai & Taesic Kim, 2020. "Advanced Control and Optimization for Complex Energy Systems," Complexity, Hindawi, vol. 2020, pages 1-3, March.
    18. Strbac, Goran, 2008. "Demand side management: Benefits and challenges," Energy Policy, Elsevier, vol. 36(12), pages 4419-4426, December.
    19. Nik, Vahid M. & Moazami, Amin, 2021. "Using collective intelligence to enhance demand flexibility and climate resilience in urban areas," Applied Energy, Elsevier, vol. 281(C).
    20. Nik, Vahid M., 2016. "Making energy simulation easier for future climate – Synthesizing typical and extreme weather data sets out of regional climate models (RCMs)," Applied Energy, Elsevier, vol. 177(C), pages 204-226.
    21. A. T. D. Perera & Vahid M. Nik & Deliang Chen & Jean-Louis Scartezzini & Tianzhen Hong, 2020. "Quantifying the impacts of climate change and extreme climate events on energy systems," Nature Energy, Nature, vol. 5(2), pages 150-159, February.
    22. Yuan Liu & Yan Peng & Min Wang & Jiajia Xie & Rui Zhou, 2020. "Multi-USV System Cooperative Underwater Target Search Based on Reinforcement Learning and Probability Map," Mathematical Problems in Engineering, Hindawi, vol. 2020, pages 1-12, May.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Nik, Vahid M. & Moazami, Amin, 2021. "Using collective intelligence to enhance demand flexibility and climate resilience in urban areas," Applied Energy, Elsevier, vol. 281(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    4. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    5. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    6. Yang, Yuchen & Javanroodi, Kavan & Nik, Vahid M., 2021. "Climate change and energy performance of European residential building stocks – A comprehensive impact assessment using climate big data from the coordinated regional climate downscaling experiment," Applied Energy, Elsevier, vol. 298(C).
    7. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    8. Cai, Qiran & Xu, Qingyang & Qing, Jing & Shi, Gang & Liang, Qiao-Mei, 2022. "Promoting wind and photovoltaics renewable energy integration through demand response: Dynamic pricing mechanism design and economic analysis for smart residential communities," Energy, Elsevier, vol. 261(PB).
    9. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    10. Perera, A.T.D. & Zhao, Bingyu & Wang, Zhe & Soga, Kenichi & Hong, Tianzhen, 2023. "Optimal design of microgrids to improve wildfire resilience for vulnerable communities at the wildland-urban interface," Applied Energy, Elsevier, vol. 335(C).
    11. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    12. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    13. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    14. Ayas Shaqour & Aya Hagishima, 2022. "Systematic Review on Deep Reinforcement Learning-Based Energy Management for Different Building Types," Energies, MDPI, vol. 15(22), pages 1-27, November.
    15. Plaga, Leonie Sara & Bertsch, Valentin, 2023. "Methods for assessing climate uncertainty in energy system models — A systematic literature review," Applied Energy, Elsevier, vol. 331(C).
    16. Blad, C. & Bøgh, S. & Kallesøe, C. & Raftery, Paul, 2023. "A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems," Applied Energy, Elsevier, vol. 337(C).
    17. Sun, Fangyuan & Kong, Xiangyu & Wu, Jianzhong & Gao, Bixuan & Chen, Ke & Lu, Ning, 2022. "DSM pricing method based on A3C and LSTM under cloud-edge environment," Applied Energy, Elsevier, vol. 315(C).
    18. Barja-Martinez, Sara & Aragüés-Peñalba, Mònica & Munné-Collado, Íngrid & Lloret-Gallego, Pau & Bullich-Massagué, Eduard & Villafafila-Robles, Roberto, 2021. "Artificial intelligence techniques for enabling Big Data services in distribution networks: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 150(C).
    19. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    20. Ribó-Pérez, D. & Carrión, A. & Rodríguez García, J. & Álvarez Bel, C., 2021. "Ex-post evaluation of Interruptible Load programs with a system optimisation perspective," Applied Energy, Elsevier, vol. 303(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:350:y:2023:i:c:s0306261923011492. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.