
Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control

Author

Listed:
  • Qingyan Li

    (Hubei Engineering and Technology Research Center for AC/DC Intelligent Distribution Network, School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China)

  • Tao Lin

    (Hubei Engineering and Technology Research Center for AC/DC Intelligent Distribution Network, School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China)

  • Qianyi Yu

    (Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Australia)

  • Hui Du

    (Hubei Engineering and Technology Research Center for AC/DC Intelligent Distribution Network, School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China)

  • Jun Li

    (Hubei Engineering and Technology Research Center for AC/DC Intelligent Distribution Network, School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China)

  • Xiyue Fu

    (Hubei Engineering and Technology Research Center for AC/DC Intelligent Distribution Network, School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China)

Abstract

With the ongoing shift of electricity generation from large thermal power plants to smaller renewable energy sources (RESs) such as wind and solar, modern renewable power systems must cope with the increased uncertainty and complexity introduced by RES-based generation and by the integration of flexible loads and new technologies. At the same time, smart grid technologies, energy management systems (EMSs), and wide-area measurement systems (WAMSs) provide high volumes of data, opening new opportunities for data-driven methods. Deep reinforcement learning (DRL), one of the state-of-the-art data-driven methods, learns optimal or near-optimal control policies by formulating the power-system control task as a Markov decision process (MDP). This paper reviews recent DRL algorithms and existing DRL-based work on operational control, emergency control, and control-related problems for small-signal stability in modern renewable power systems. The fundamentals of DRL and several commonly used DRL algorithms are briefly introduced, and current issues and expected future directions are discussed.
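To make the MDP framing mentioned in the abstract concrete, here is a minimal, self-contained sketch. It is not taken from the reviewed article: the single-bus voltage-regulation task, the state and action definitions, and all names are illustrative assumptions. It uses tabular Q-learning, the table-based precursor of the DRL algorithms the review covers; a DRL method such as DQN would replace the Q-table with a neural network approximating Q(s, a).

```python
import numpy as np

# Hypothetical toy MDP (illustrative, not from the paper): regulate the
# voltage at a single bus by switching a reactive-power compensator.
# States discretize the voltage deviation; actions absorb, hold, or
# inject reactive power.

N_STATES = 11          # discretized voltage deviation, -0.05 .. +0.05 p.u.
ACTIONS = (-1, 0, +1)  # absorb Q, do nothing, inject Q

def step(state, action, rng):
    """One MDP transition: the action nudges the voltage, load noise perturbs it."""
    nxt = state + action + int(rng.choice((-1, 0, 1)))  # stochastic load drift
    nxt = int(np.clip(nxt, 0, N_STATES - 1))
    deviation = abs(nxt - N_STATES // 2)                # distance from 1.0 p.u.
    reward = -float(deviation)                          # penalize voltage error
    return nxt, reward

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))
        for _ in range(50):                             # finite-horizon episode
            if rng.random() < eps:
                a = int(rng.integers(len(ACTIONS)))     # explore
            else:
                a = int(q[s].argmax())                  # exploit
            s2, r = step(s, ACTIONS[a], rng)
            # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    policy = [ACTIONS[int(a)] for a in q.argmax(axis=1)]
    print("learned action per voltage bin:", policy)
```

The learned policy injects reactive power in low-voltage bins and absorbs it in high-voltage bins; the DRL methods surveyed in the review scale this same loop to continuous, high-dimensional power-system states by substituting a deep network for the table.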

Suggested Citation

  • Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:10:p:4143-:d:1149106

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/10/4143/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/10/4143/
    Download Restriction: no

    References listed on IDEAS

    1. Ardi Tampuu & Tambet Matiisen & Dorian Kodelja & Ilya Kuzovkin & Kristjan Korjus & Juhan Aru & Jaan Aru & Raul Vicente, 2017. "Multiagent cooperation and competition with deep reinforcement learning," PLOS ONE, Public Library of Science, vol. 12(4), pages 1-15, April.
    2. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    3. Cao, Di & Zhao, Junbo & Hu, Weihao & Ding, Fei & Yu, Nanpeng & Huang, Qi & Chen, Zhe, 2022. "Model-free voltage control of active distribution system with PVs using surrogate model-based deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    4. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun, 2022. "Coordinated load frequency control of multi-area integrated energy system using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 306(PA).
    5. Aien, Morteza & Hajebrahimi, Ali & Fotuhi-Firuzabad, Mahmud, 2016. "A comprehensive review on uncertainty modeling techniques in power system studies," Renewable and Sustainable Energy Reviews, Elsevier, vol. 57(C), pages 1077-1089.
    6. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan et al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    7. Zhang, Guozhou & Hu, Weihao & Cao, Di & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2021. "A novel deep reinforcement learning enabled sparsity promoting adaptive control method to improve the stability of power systems with wind energy penetration," Renewable Energy, Elsevier, vol. 178(C), pages 363-376.
    8. Jean-François Toubeau & Bashir Bakhshideh Zad & Martin Hupez & Zacharie De Grève & François Vallée, 2020. "Deep Reinforcement Learning-Based Voltage Control to Deal with Model Uncertainties in Distribution Networks," Energies, MDPI, vol. 13(15), pages 1-15, August.
    9. Jing Zhang & Yiqi Li & Zhi Wu & Chunyan Rong & Tao Wang & Zhang Zhang & Suyang Zhou, 2021. "Deep-Reinforcement-Learning-Based Two-Timescale Voltage Control for Distribution Systems," Energies, MDPI, vol. 14(12), pages 1-15, June.
    10. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap et al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Ekaterina V. Orlova, 2023. "Dynamic Regimes for Corporate Human Capital Development Used Reinforcement Learning Methods," Mathematics, MDPI, vol. 11(18), pages 1-22, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    2. Se-Heon Lim & Sung-Guk Yoon, 2022. "Dynamic DNR and Solar PV Smart Inverter Control Scheme Using Heterogeneous Multi-Agent Deep Reinforcement Learning," Energies, MDPI, vol. 15(23), pages 1-18, December.
    3. Young Joon Park & Yoon Sang Cho & Seoung Bum Kim, 2019. "Multi-agent reinforcement learning with approximate model learning for competitive games," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-20, September.
    4. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    5. Michael Curry & Alexander Trott & Soham Phade & Yu Bai & Stephan Zheng, 2022. "Analyzing Micro-Founded General Equilibrium Models with Many Agents using Deep Reinforcement Learning," Papers 2201.01163, arXiv.org, revised Feb 2022.
    6. Dong Liu & Feng Xiao & Jian Luo & Fan Yang, 2023. "Deep Reinforcement Learning-Based Holding Control for Bus Bunching under Stochastic Travel Time and Demand," Sustainability, MDPI, vol. 15(14), pages 1-18, July.
    7. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    8. Zhang, Bin & Hu, Weihao & Xu, Xiao & Li, Tao & Zhang, Zhenyuan & Chen, Zhe, 2022. "Physical-model-free intelligent energy management for a grid-connected hybrid wind-microturbine-PV-EV energy system via deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 200(C), pages 433-448.
    9. Bossert, Leonie & Hagendorff, Thilo, 2021. "Animals and AI. The role of animals in AI research and application – An overview and ethical evaluation," Technology in Society, Elsevier, vol. 67(C).
    10. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    11. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    12. Xuan-Kun Li & Jian-Xu Ma & Xiang-Yu Li & Jun-Jie Hu & Chuan-Yang Ding & Feng-Kai Han & Xiao-Min Guo & Xi Tan & Xian-Min Jin, 2024. "High-efficiency reinforcement learning with hybrid architecture photonic integrated circuit," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    13. Jude Suchithra & Duane Robinson & Amin Rajabi, 2023. "Hosting Capacity Assessment Strategies and Reinforcement Learning Methods for Coordinated Voltage Control in Electricity Distribution Networks: A Review," Energies, MDPI, vol. 16(5), pages 1-28, March.
    14. Yuling Huang & Xiaoping Lu & Chujin Zhou & Yunlin Song, 2023. "DADE-DQN: Dual Action and Dual Environment Deep Q-Network for Enhancing Stock Trading Strategy," Mathematics, MDPI, vol. 11(17), pages 1-27, August.
    15. Grover-Silva, Etta & Heleno, Miguel & Mashayekh, Salman & Cardoso, Gonçalo & Girard, Robin & Kariniotakis, George, 2018. "A stochastic optimal power flow for scheduling flexible resources in microgrids operation," Applied Energy, Elsevier, vol. 229(C), pages 201-208.
    16. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    17. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    18. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    19. Zhu, Xingxu & Hou, Xiangchen & Li, Junhui & Yan, Gangui & Li, Cuiping & Wang, Dongbo, 2023. "Distributed online prediction optimization algorithm for distributed energy resources considering the multi-periods optimal operation," Applied Energy, Elsevier, vol. 348(C).
    20. Yin, Linfei & He, Xiaoyu, 2023. "Artificial emotional deep Q learning for real-time smart voltage control of cyber-physical social power systems," Energy, Elsevier, vol. 273(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:16:y:2023:i:10:p:4143-:d:1149106. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.