
Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks

Author

Listed:
  • Zeng, Lanting
  • Qiu, Dawei
  • Sun, Mingyang

Abstract

Demand response improves grid security by adjusting the flexibility of consumers while maintaining their demand–supply balance in real time. With the large-scale deployment of distributed digital communication technologies and advanced metering infrastructures, data-driven approaches such as multi-agent reinforcement learning (MARL) are being widely employed to solve demand response problems. Nevertheless, the massive exchange of data inside and outside the demand response management system exposes it to severe cyber-attack threats. The cyber security requirements of MARL-based demand response problems have received little attention in existing studies. To this end, this paper proposes a robust adversarial multi-agent reinforcement learning framework for demand response (RAMARL-DR) with enhanced resilience against adversarial attacks. In particular, the proposed RAMARL-DR first constructs an adversary agent that aims to cause the worst-case performance by formulating an adversarial attack, and then adopts periodic alternating robust adversarial training with the optimal adversary to diminish the severe impacts induced by adversarial attacks. Case studies are conducted in CityLearn, an OpenAI Gym environment that provides a standard evaluation platform for MARL algorithms on demand response problems. Empirical results indicate that the MARL-based demand response management system is vulnerable in the presence of the adversary agent, and that its performance can be significantly improved after periodic alternating robust adversarial training. The adversary agent can result in a 41.43% higher Ramping metric than in the no-adversary case, whereas the proposed RAMARL-DR can significantly enhance system resilience, with an approximately 38.85% reduction in the ramping of net demand.
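To make the alternating-training idea described in the abstract concrete, the following is a minimal, self-contained Python sketch; it is not the authors' code. The toy environment (ToyDemandEnv), the linear policies, and the random-search updates are hypothetical stand-ins for CityLearn and the deep MARL algorithm used in the paper, and the perturbation budget eps is an assumed attack constraint. Only the structure of the loop mirrors the described approach: train the demand-response agents against a frozen observation-perturbing adversary, then train the adversary against the frozen agents, and repeat.

    # Minimal sketch (assumptions as noted above); dependency-free apart from NumPy.
    import numpy as np

    rng = np.random.default_rng(0)

    class ToyDemandEnv:
        """Two buildings pick load adjustments each hour; reward penalizes ramping of net demand."""
        def __init__(self, horizon=24):
            self.horizon = horizon
            self.base = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, horizon))

        def reset(self):
            self.t, self.prev_net = 0, None
            return np.array([self.base[0], 0.0])            # observation: base demand, time fraction

        def step(self, actions):
            net = self.base[self.t] + float(np.sum(actions))
            ramp = 0.0 if self.prev_net is None else abs(net - self.prev_net)
            self.prev_net, self.t = net, self.t + 1
            done = self.t >= self.horizon
            obs = np.array([self.base[self.t % self.horizon], self.t / self.horizon])
            return obs, -ramp, done                          # agents try to minimize ramping

    class LinearPolicy:
        """Tiny deterministic policy; stands in for a deep MARL actor."""
        def __init__(self, out_dim=1):
            self.w = np.zeros((out_dim, 2))

        def __call__(self, obs):
            return np.clip(self.w @ obs, -0.5, 0.5)

    def episode_return(env, agents, adversary=None, eps=0.1):
        """Roll out one episode; the adversary, if present, perturbs observations within an eps budget."""
        obs, total, done = env.reset(), 0.0, False
        while not done:
            seen = obs if adversary is None else obs + eps * np.tanh(adversary(obs))
            obs, r, done = env.step([float(a(seen)[0]) for a in agents])
            total += r
        return total

    def hill_climb(policy, score_fn, sigma=0.1, tries=30):
        """Random-search update standing in for a gradient-based RL step."""
        best = score_fn()
        for _ in range(tries):
            old = policy.w.copy()
            policy.w = old + sigma * rng.standard_normal(old.shape)
            s = score_fn()
            if s > best:
                best = s
            else:
                policy.w = old                               # revert if no improvement

    env = ToyDemandEnv()
    agents = [LinearPolicy(), LinearPolicy()]
    adversary = LinearPolicy(out_dim=2)                      # perturbs the 2-dimensional observation

    for phase in range(6):                                   # periodic alternating robust adversarial training
        if phase % 2 == 0:                                   # DR agents trained against the frozen adversary
            for a in agents:
                hill_climb(a, lambda: episode_return(env, agents, adversary))
        else:                                                # adversary trained against the frozen DR agents
            hill_climb(adversary, lambda: -episode_return(env, agents, adversary))

    print("return with adversary:   ", episode_return(env, agents, adversary))
    print("return without adversary:", episode_return(env, agents, None))

The random-search updates keep the sketch dependency-free; in the paper the inner updates would instead be performed by a deep MARL algorithm, and the adversary would be optimized to realize the worst-case attack under its perturbation budget.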

Suggested Citation

  • Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
  • Handle: RePEc:eee:appene:v:324:y:2022:i:c:s0306261922009850
    DOI: 10.1016/j.apenergy.2022.119688

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922009850
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.119688?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    2. Wang, Qi & Zhang, Chunyu & Ding, Yi & Xydis, George & Wang, Jianhui & Østergaard, Jacob, 2015. "Review of real-time electricity markets for integrating Distributed Energy Resources and Demand Response," Applied Energy, Elsevier, vol. 138(C), pages 695-706.
    3. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    4. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    5. Coppolino, Luigi & D'Antonio, Salvatore & Romano, Luigi, 2014. "Exposing vulnerabilities in electric power grids: An experimental approach," International Journal of Critical Infrastructure Protection, Elsevier, vol. 7(1), pages 51-60.
    6. Kelley, Morgan T. & Pattison, Richard C. & Baldick, Ross & Baldea, Michael, 2018. "An MILP framework for optimizing demand response operation of air separation units," Applied Energy, Elsevier, vol. 222(C), pages 951-966.
    7. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    8. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    9. Heidari, A. & Mortazavi, S.S. & Bansal, R.C., 2020. "Stochastic effects of ice storage on improvement of an energy hub optimal operation including demand response and renewable energies," Applied Energy, Elsevier, vol. 261(C).
    10. Ignacio J. Perez-Arriaga & Carlos Batlle, 2012. "Impacts of Intermittent Renewables on Electricity Generation System Operation," Economics of Energy & Environmental Policy, International Association for Energy Economics, vol. 0(Number 2).
    11. Wang, Jianxiao & Zhong, Haiwang & Ma, Ziming & Xia, Qing & Kang, Chongqing, 2017. "Review and prospect of integrated demand response in the multi-energy system," Applied Energy, Elsevier, vol. 202(C), pages 772-782.
    12. Bianchini, Gianni & Casini, Marco & Vicino, Antonio & Zarrilli, Donato, 2016. "Demand-response in building heating systems: A Model Predictive Control approach," Applied Energy, Elsevier, vol. 168(C), pages 159-170.
    13. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    14. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Qiu, Dawei & Wang, Yi & Wang, Junkai & Jiang, Chuanwen & Strbac, Goran, 2023. "Personalized retail pricing design for smart metering consumers in electricity market," Applied Energy, Elsevier, vol. 348(C).
    2. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    3. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    3. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    4. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    5. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    6. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    7. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    8. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    9. Lu, Renzhi & Bai, Ruichang & Huang, Yuan & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2021. "Data-driven real-time price-based demand response for industrial facilities energy management," Applied Energy, Elsevier, vol. 283(C).
    10. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    11. Zeng, Huibin & Shao, Bilin & Dai, Hongbin & Tian, Ning & Zhao, Wei, 2023. "Incentive-based demand response strategies for natural gas considering carbon emissions and load volatility," Applied Energy, Elsevier, vol. 348(C).
    12. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    13. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    14. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    15. Zhang, Xiongfeng & Lu, Renzhi & Jiang, Junhui & Hong, Seung Ho & Song, Won Seok, 2021. "Testbed implementation of reinforcement learning-based demand response energy management system," Applied Energy, Elsevier, vol. 297(C).
    16. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    17. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems:A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    18. Tsoumalis, Georgios I. & Bampos, Zafeirios N. & Biskas, Pandelis N. & Keranidis, Stratos D. & Symeonidis, Polychronis A. & Voulgarakis, Dimitrios K., 2022. "A novel system for providing explicit demand response from domestic natural gas boilers," Applied Energy, Elsevier, vol. 317(C).
    19. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    20. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:324:y:2022:i:c:s0306261922009850. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.