IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v377y2025ipas0306261924017896.html

Centralised rehearsal of decentralised cooperation: Multi-agent reinforcement learning for the scalable coordination of residential energy flexibility

Author

Listed:
  • Charbonnier, Flora
  • Peng, Bei
  • Vienne, Julie
  • Stai, Eleni
  • Morstyn, Thomas
  • McCulloch, Malcolm

Abstract

This paper investigates the use of deep multi-agent reinforcement learning (MARL) for the coordination of residential energy flexibility. In particular, we focus on achieving cooperation between homes in a way that is fully privacy-preserving and scalable, and that allows for the management of distribution network voltage constraints. Previous work demonstrated that MARL-based distributed control can be achieved with no sharing of personal data required during execution. However, previous cooperative MARL-based approaches impose an ever greater training computational burden as the size of the system increases, limiting scalability, and they do not manage their impact on distribution network constraints. We therefore adopt a deep multi-agent actor–critic method that uses a centralised but factored critic to rehearse coordination ahead of execution, such that homes can successfully cooperate at scale, with only first-order growth in computational time as the system size increases. For 30 homes, training times are thus 34 times shorter than with a previous state-of-the-art reinforcement learning approach that lacks the factored critic. Moreover, experiments show that the cooperation of agents reduces the likelihood of under-voltages by 47.2%. The results indicate significant potential value for the management of energy user bills, battery depreciation, and distribution network voltage, with minimal information and communication infrastructure requirements, no interference with daily activities, and no sharing of personal data.
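The paper's architecture is not reproduced here, but the core idea of a centralised-but-factored critic — a joint value estimated as a sum of per-agent utilities, trained centrally and then discarded so each agent acts on local information alone — can be sketched in a minimal value-decomposition-style tabular toy. Everything below (the stateless task, the additive team reward, the learning rates) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions = 2, 2

# Per-agent utility tables: the factored pieces of the centralised critic.
q = [np.zeros(n_actions) for _ in range(n_agents)]

def team_reward(actions):
    # Toy stateless coordination task (illustrative assumption): the team
    # is rewarded for each agent choosing the "flexible" action 1, so the
    # reward is additively factorable across agents.
    return float(sum(actions))

alpha, eps = 0.1, 0.2
for step in range(2000):
    # Epsilon-greedy joint action during the centralised rehearsal phase.
    actions = [int(rng.integers(n_actions)) if rng.random() < eps
               else int(np.argmax(q[i])) for i in range(n_agents)]
    r = team_reward(actions)
    # Factored critic: Q_tot is the sum of per-agent utilities.
    q_tot = sum(q[i][actions[i]] for i in range(n_agents))
    delta = r - q_tot  # stateless task, so no bootstrapped next-state term
    for i in range(n_agents):
        # Each table receives the shared TD error: the gradient of the
        # additive Q_tot with respect to each of its pieces is 1.
        q[i][actions[i]] += alpha * delta

# Decentralised execution: each home acts on its own table only,
# with no communication and no shared data.
policy = [int(np.argmax(q[i])) for i in range(n_agents)]
print(policy)
```

Because the joint value is additive, the centralised TD error trains each local table toward a consistent decomposition, after which greedy execution on local utilities alone recovers the cooperative joint action — the "rehearse centrally, execute decentrally" pattern the abstract describes.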

Suggested Citation

  • Charbonnier, Flora & Peng, Bei & Vienne, Julie & Stai, Eleni & Morstyn, Thomas & McCulloch, Malcolm, 2025. "Centralised rehearsal of decentralised cooperation: Multi-agent reinforcement learning for the scalable coordination of residential energy flexibility," Applied Energy, Elsevier, vol. 377(PA).
  • Handle: RePEc:eee:appene:v:377:y:2025:i:pa:s0306261924017896
    DOI: 10.1016/j.apenergy.2024.124406

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924017896
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.124406?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
    2. Zhang, Xiaoshun & Bao, Tao & Yu, Tao & Yang, Bo & Han, Chuanjia, 2017. "Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid," Energy, Elsevier, vol. 133(C), pages 348-365.
    3. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    4. Guerrero, Jaysson & Gebbran, Daniel & Mhanna, Sleiman & Chapman, Archie C. & Verbič, Gregor, 2020. "Towards a transactive energy system for integration of distributed energy resources: Home energy management, distributed optimal power flow, and peer-to-peer energy trading," Renewable and Sustainable Energy Reviews, Elsevier, vol. 132(C).
    5. Jacopo Torriti, 2022. "Household electricity demand, the intrinsic flexibility index and UK wholesale electricity market prices," Environmental Economics and Policy Studies, Springer;Society for Environmental Economics and Policy Studies - SEEPS, vol. 24(1), pages 7-27, January.
    6. Jin-Gyeom Kim & Bowon Lee, 2020. "Automatic P2P Energy Trading Model Based on Reinforcement Learning Using Long Short-Term Delayed Reward," Energies, MDPI, vol. 13(20), pages 1-27, October.
    7. Crozier, Constance & Apostolopoulou, Dimitra & McCulloch, Malcolm, 2018. "Mitigating the impact of personal vehicle electrification: A power generation perspective," Energy Policy, Elsevier, vol. 118(C), pages 474-481.
    8. Darby, Sarah J., 2020. "Demand response and smart technology in theory and practice: Customer experiences and system actors," Energy Policy, Elsevier, vol. 143(C).
    9. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    10. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    11. Dufo-López, Rodolfo & Lujano-Rojas, Juan M. & Bernal-Agustín, José L., 2014. "Comparison of different lead–acid battery lifetime prediction models for use in simulation of stand-alone photovoltaic systems," Applied Energy, Elsevier, vol. 115(C), pages 242-253.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
    2. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    3. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    4. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    5. Tsaousoglou, Georgios & Giraldo, Juan S. & Paterakis, Nikolaos G., 2022. "Market Mechanisms for Local Electricity Markets: A review of models, solution concepts and algorithmic techniques," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
    6. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    7. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    8. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    9. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    10. Pallonetto, Fabiano & De Rosa, Mattia & Milano, Federico & Finn, Donal P., 2019. "Demand response algorithms for smart-grid ready residential buildings using machine learning models," Applied Energy, Elsevier, vol. 239(C), pages 1265-1282.
    11. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    12. Cai, Qiran & Xu, Qingyang & Qing, Jing & Shi, Gang & Liang, Qiao-Mei, 2022. "Promoting wind and photovoltaics renewable energy integration through demand response: Dynamic pricing mechanism design and economic analysis for smart residential communities," Energy, Elsevier, vol. 261(PB).
    13. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    14. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    15. Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    16. Ajagekar, Akshay & You, Fengqi, 2024. "Variational quantum circuit based demand response in buildings leveraging a hybrid quantum-classical strategy," Applied Energy, Elsevier, vol. 364(C).
    17. Golmohamadi, Hessam, 2022. "Demand-side management in industrial sector: A review of heavy industries," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
    18. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    19. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    20. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:377:y:2025:i:pa:s0306261924017896. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.