Printed from https://ideas.repec.org/a/eee/appene/v314y2022ics0306261922002689.html

Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility

Author

Listed:
  • Charbonnier, Flora
  • Morstyn, Thomas
  • McCulloch, Malcolm D.

Abstract

This paper proposes a novel scalable type of multi-agent reinforcement learning-based coordination for distributed residential energy. Cooperating agents learn to control the flexibility offered by electric vehicles, space heating and flexible loads in a partially observable stochastic environment. In the standard independent Q-learning approach, the coordination performance of agents under partial observability drops at scale in stochastic environments. Here, the novel combination of learning from off-line convex optimisations on historical data and isolating marginal contributions to total rewards in reward signals increases stability and performance at scale. Using fixed-size Q-tables, prosumers are able to assess their marginal impact on total system objectives without sharing personal data either with each other or with a central coordinator. Case studies are used to assess the fitness of different combinations of exploration sources, reward definitions, and multi-agent learning frameworks. It is demonstrated that the proposed strategies create value at individual and system levels thanks to reductions in the costs of energy imports, losses, distribution network congestion, battery depreciation and greenhouse gas emissions.
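The abstract's core idea of learning from marginal contributions can be illustrated with a minimal sketch. This is not the authors' implementation: the system objective, the baseline action, and all parameter values below are hypothetical stand-ins. Each agent keeps an independent, fixed-size Q-table and is rewarded with the change in a shared objective when its own action is swapped for a baseline, so no agent needs to see any other agent's data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS = 3, 4, 2
ALPHA, EPSILON, BASELINE_ACTION = 0.1, 0.2, 0

def system_reward(actions):
    """Hypothetical total objective: quadratic penalty on simultaneous
    consumption, a crude stand-in for network congestion costs."""
    return -float(sum(actions)) ** 2

def marginal_reward(actions, i):
    """Agent i's marginal contribution: the total reward minus the
    counterfactual total reward with agent i's action set to the baseline."""
    counterfactual = list(actions)
    counterfactual[i] = BASELINE_ACTION
    return system_reward(actions) - system_reward(counterfactual)

# One independent, fixed-size Q-table per agent (no central coordinator).
q_tables = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

for episode in range(500):
    state = int(rng.integers(N_STATES))
    # Epsilon-greedy action selection, chosen independently by each agent.
    actions = [
        int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON
        else int(np.argmax(q[state]))
        for q in q_tables
    ]
    for i, q in enumerate(q_tables):
        r = marginal_reward(actions, i)
        # Stateless (bandit-style) Q-update, kept deliberately simple here.
        q[state, actions[i]] += ALPHA * (r - q[state, actions[i]])
```

Because the marginal reward isolates each agent's own impact on the shared objective, agents here learn to avoid the congesting action without ever observing the other agents' choices.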

Suggested Citation

  • Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Scalable multi-agent reinforcement learning for distributed control of residential energy flexibility," Applied Energy, Elsevier, vol. 314(C).
  • Handle: RePEc:eee:appene:v:314:y:2022:i:c:s0306261922002689
    DOI: 10.1016/j.apenergy.2022.118825

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922002689
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.118825?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Crozier, Constance & Apostolopoulou, Dimitra & McCulloch, Malcolm, 2018. "Mitigating the impact of personal vehicle electrification: A power generation perspective," Energy Policy, Elsevier, vol. 118(C), pages 474-481.
    2. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    3. Schellenberg, C. & Lohan, J. & Dimache, L., 2020. "Comparison of metaheuristic optimisation methods for grid-edge technology that leverages heat pumps and thermal energy storage," Renewable and Sustainable Energy Reviews, Elsevier, vol. 131(C).
    4. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    5. Darby, Sarah J., 2020. "Demand response and smart technology in theory and practice: Customer experiences and system actors," Energy Policy, Elsevier, vol. 143(C).
    6. Thomas Morstyn & Niall Farrell & Sarah J. Darby & Malcolm D. McCulloch, 2018. "Using peer-to-peer energy-trading platforms to incentivize prosumers to form federated power plants," Nature Energy, Nature, vol. 3(2), pages 94-101, February.
    7. Guerrero, Jaysson & Gebbran, Daniel & Mhanna, Sleiman & Chapman, Archie C. & Verbič, Gregor, 2020. "Towards a transactive energy system for integration of distributed energy resources: Home energy management, distributed optimal power flow, and peer-to-peer energy trading," Renewable and Sustainable Energy Reviews, Elsevier, vol. 132(C).
    8. Jin-Gyeom Kim & Bowon Lee, 2020. "Automatic P2P Energy Trading Model Based on Reinforcement Learning Using Long Short-Term Delayed Reward," Energies, MDPI, vol. 13(20), pages 1-27, October.
    9. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    10. Zhang, Xiaoshun & Bao, Tao & Yu, Tao & Yang, Bo & Han, Chuanjia, 2017. "Deep transfer Q-learning with virtual leader-follower for supply-demand Stackelberg game of smart grid," Energy, Elsevier, vol. 133(C), pages 348-365.
    11. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & M, 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    12. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    13. Dufo-López, Rodolfo & Lujano-Rojas, Juan M. & Bernal-Agustín, José L., 2014. "Comparison of different lead–acid battery lifetime prediction models for use in simulation of stand-alone photovoltaic systems," Applied Energy, Elsevier, vol. 115(C), pages 242-253.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    2. Cai, Qiran & Xu, Qingyang & Qing, Jing & Shi, Gang & Liang, Qiao-Mei, 2022. "Promoting wind and photovoltaics renewable energy integration through demand response: Dynamic pricing mechanism design and economic analysis for smart residential communities," Energy, Elsevier, vol. 261(PB).
    3. Nik, Vahid M. & Hosseini, Mohammad, 2023. "CIRLEM: a synergic integration of Collective Intelligence and Reinforcement learning in Energy Management for enhanced climate resilience and lightweight computation," Applied Energy, Elsevier, vol. 350(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works.
    1. Charbonnier, Flora & Morstyn, Thomas & McCulloch, Malcolm D., 2022. "Coordination of resources at the edge of the electricity grid: Systematic review and taxonomy," Applied Energy, Elsevier, vol. 318(C).
    2. Tsaousoglou, Georgios & Giraldo, Juan S. & Paterakis, Nikolaos G., 2022. "Market Mechanisms for Local Electricity Markets: A review of models, solution concepts and algorithmic techniques," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    5. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    6. Hernandez-Matheus, Alejandro & Löschenbrand, Markus & Berg, Kjersti & Fuchs, Ida & Aragüés-Peñalba, Mònica & Bullich-Massagué, Eduard & Sumper, Andreas, 2022. "A systematic review of machine learning techniques related to local energy communities," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    7. Davide Deltetto & Davide Coraci & Giuseppe Pinto & Marco Savino Piscitelli & Alfonso Capozzoli, 2021. "Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings," Energies, MDPI, vol. 14(10), pages 1-25, May.
    8. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    9. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    10. Bhuiyan, Erphan A. & Hossain, Md. Zahid & Muyeen, S.M. & Fahim, Shahriar Rahman & Sarker, Subrata K. & Das, Sajal K., 2021. "Towards next generation virtual power plant: Technology review and frameworks," Renewable and Sustainable Energy Reviews, Elsevier, vol. 150(C).
    11. Tushar, Wayes & Yuen, Chau & Saha, Tapan K. & Morstyn, Thomas & Chapman, Archie C. & Alam, M. Jan E. & Hanif, Sarmad & Poor, H. Vincent, 2021. "Peer-to-peer energy systems for connected communities: A review of recent advances and emerging challenges," Applied Energy, Elsevier, vol. 282(PA).
    12. Barja-Martinez, Sara & Aragüés-Peñalba, Mònica & Munné-Collado, Íngrid & Lloret-Gallego, Pau & Bullich-Massagué, Eduard & Villafafila-Robles, Roberto, 2021. "Artificial intelligence techniques for enabling Big Data services in distribution networks: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 150(C).
    13. Zhou, Yuekuan & Lund, Peter D., 2023. "Peer-to-peer energy sharing and trading of renewable energy in smart communities ─ trading pricing models, decision-making and agent-based collaboration," Renewable Energy, Elsevier, vol. 207(C), pages 177-193.
    14. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    15. Ahmad, Tanveer & Madonski, Rafal & Zhang, Dongdong & Huang, Chao & Mujeeb, Asad, 2022. "Data-driven probabilistic machine learning in sustainable smart energy/smart energy systems: Key developments, challenges, and future research opportunities in the context of smart grid paradigm," Renewable and Sustainable Energy Reviews, Elsevier, vol. 160(C).
    16. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    17. Maarten Wolsink, 2020. "Framing in Renewable Energy Policies: A Glossary," Energies, MDPI, vol. 13(11), pages 1-31, June.
    18. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    19. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    20. Pallonetto, Fabiano & De Rosa, Mattia & Milano, Federico & Finn, Donal P., 2019. "Demand response algorithms for smart-grid ready residential buildings using machine learning models," Applied Energy, Elsevier, vol. 239(C), pages 1265-1282.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:314:y:2022:i:c:s0306261922002689. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.