Printed from https://ideas.repec.org/a/gam/jeners/v15y2022i19p6920-d921288.html

Deep Reinforcement Learning-Based Approach for Autonomous Power Flow Control Using Only Topology Changes

Author

Listed:
  • Ivana Damjanović

    (Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

  • Ivica Pavić

    (Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

  • Mate Puljiz

    (Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

  • Mario Brcic

    (Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia)

Abstract

With the increasing complexity of power system structures and the growing penetration of renewable energy, driven primarily by the need for decarbonization, power system operation and control are becoming increasingly challenging. These changes result in an enormous increase in system complexity: the number of active control points in the grid is too high to be managed manually, which creates an opportunity for the application of artificial intelligence technology in the power system. For power flow control, many studies have focused on using generation redispatching, load shedding, or demand-side management flexibilities. This paper presents a novel reinforcement learning (RL)-based approach for the secure operation of a power system via autonomous topology changes under various constraints. The proposed agent learns from scratch to master power flow control purely from data. It can make autonomous topology changes according to current system conditions, supporting grid operators in taking effective preventive control actions. A state-of-the-art RL algorithm—the dueling double deep Q-network with prioritized replay—is adopted to train an effective agent that achieves the desired performance. The IEEE 14-bus system is used to demonstrate the effectiveness and promising performance of the proposed agent, which controls the power network for up to a month using only nine actions affecting substation configuration.
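The two algorithmic ingredients named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation—the function names and numeric values below are hypothetical—but it shows (1) how a dueling network combines a state-value estimate V(s) with per-action advantages A(s, a), and (2) how the double DQN target decouples action selection (online network) from action evaluation (target network) to reduce overestimation bias:

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    Subtracting the mean advantage makes the V/A decomposition
    identifiable. `value` is the scalar V(s); `advantages` is an
    array of A(s, a) over the discrete action set (in the paper's
    setting, nine substation-configuration actions).
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target for one transition.

    The online network picks the greedy next action; the target
    network supplies its value. In plain DQN both roles use the
    target network, which tends to overestimate Q-values.
    """
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))          # select with online net
    return reward + gamma * q_target_next[a_star]   # evaluate with target net

# Illustrative call with made-up numbers:
q = dueling_q_values(1.0, [0.5, -0.5, 0.0])  # -> array of Q-values per action
```

Prioritized replay, the third ingredient, would additionally sample stored transitions in proportion to their temporal-difference error rather than uniformly; it is omitted here for brevity.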

Suggested Citation

  • Ivana Damjanović & Ivica Pavić & Mate Puljiz & Mario Brcic, 2022. "Deep Reinforcement Learning-Based Approach for Autonomous Power Flow Control Using Only Topology Changes," Energies, MDPI, vol. 15(19), pages 1-16, September.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:19:p:6920-:d:921288

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/19/6920/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/19/6920/
    Download Restriction: no

    References listed on IDEAS

    1. Zhang, Xiongfeng & Lu, Renzhi & Jiang, Junhui & Hong, Seung Ho & Song, Won Seok, 2021. "Testbed implementation of reinforcement learning-based demand response energy management system," Applied Energy, Elsevier, vol. 297(C).
    2. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    3. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Zhencheng Fan & Zheng Yan & Shiping Wen, 2023. "Deep Learning and Artificial Intelligence in Sustainability: A Review of SDGs, Renewable Energy, and Environmental Health," Sustainability, MDPI, vol. 15(18), pages 1-20, September.
    2. Hubert Szczepaniuk & Edyta Karolina Szczepaniuk, 2022. "Applications of Artificial Intelligence Algorithms in the Energy Sector," Energies, MDPI, vol. 16(1), pages 1-24, December.
    3. Mahdi Khodayar & Jacob Regan, 2023. "Deep Neural Networks in Power Systems: A Review," Energies, MDPI, vol. 16(12), pages 1-38, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lu, Renzhi & Bai, Ruichang & Ding, Yuemin & Wei, Min & Jiang, Junhui & Sun, Mingyang & Xiao, Feng & Zhang, Hai-Tao, 2021. "A hybrid deep learning-based online energy management scheme for industrial microgrid," Applied Energy, Elsevier, vol. 304(C).
    2. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    5. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    6. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    7. Vo-Van Thanh & Wencong Su & Bin Wang, 2022. "Optimal DC Microgrid Operation with Model Predictive Control-Based Voltage-Dependent Demand Response and Optimal Battery Dispatch," Energies, MDPI, vol. 15(6), pages 1-19, March.
    8. Xu, Fangyuan & Zhu, Weidong & Wang, Yi Fei & Lai, Chun Sing & Yuan, Haoliang & Zhao, Yujia & Guo, Siming & Fu, Zhengxin, 2022. "A new deregulated demand response scheme for load over-shifting city in regulated power market," Applied Energy, Elsevier, vol. 311(C).
    9. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    10. Kalim Ullah & Sajjad Ali & Taimoor Ahmad Khan & Imran Khan & Sadaqat Jan & Ibrar Ali Shah & Ghulam Hafeez, 2020. "An Optimal Energy Optimization Strategy for Smart Grid Integrated with Renewable Energy Sources and Demand Response Programs," Energies, MDPI, vol. 13(21), pages 1-17, November.
    11. Pinto, Giuseppe & Deltetto, Davide & Capozzoli, Alfonso, 2021. "Data-driven district energy management with surrogate models and deep reinforcement learning," Applied Energy, Elsevier, vol. 304(C).
    12. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    13. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    14. Chao-Chung Hsu & Bi-Hai Jiang & Chun-Cheng Lin, 2023. "A Survey on Recent Applications of Artificial Intelligence and Optimization for Smart Grids in Smart Manufacturing," Energies, MDPI, vol. 16(22), pages 1-15, November.
    15. Jing Zhang & Yiqi Li & Zhi Wu & Chunyan Rong & Tao Wang & Zhang Zhang & Suyang Zhou, 2021. "Deep-Reinforcement-Learning-Based Two-Timescale Voltage Control for Distribution Systems," Energies, MDPI, vol. 14(12), pages 1-15, June.
    16. Li, Zening & Su, Su & Jin, Xiaolong & Chen, Houhe, 2021. "Distributed energy management for active distribution network considering aggregated office buildings," Renewable Energy, Elsevier, vol. 180(C), pages 1073-1087.
    17. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    18. Zhao, Liyuan & Yang, Ting & Li, Wei & Zomaya, Albert Y., 2022. "Deep reinforcement learning-based joint load scheduling for household multi-energy system," Applied Energy, Elsevier, vol. 324(C).
    19. Amit Shewale & Anil Mokhade & Nitesh Funde & Neeraj Dhanraj Bokde, 2022. "A Survey of Efficient Demand-Side Management Techniques for the Residential Appliance Scheduling Problem in Smart Homes," Energies, MDPI, vol. 15(8), pages 1-34, April.
    20. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:19:p:6920-:d:921288. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.