IDEAS home Printed from https://ideas.repec.org/a/gam/jeners/v16y2023i4p1608-d1059125.html

Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey

Author

Listed:
  • Mudhafar Al-Saadi

    (School of Computing, Engineering, and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK)

  • Maher Al-Greer

    (School of Computing, Engineering, and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK)

  • Michael Short

    (School of Computing, Engineering, and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK)

Abstract

Intelligent energy management in renewable-based power distribution applications, such as microgrids, smart grids, smart buildings, and EV systems, is becoming increasingly important in the context of the transition toward the decentralization, digitalization, and decarbonization of energy networks. Many of the challenges in this transition can arguably be overcome, and its benefits leveraged, by adopting intelligent, autonomous, computer-based decision-making through smart technologies, specifically artificial intelligence. Unlike other numerical or soft-computing optimization methods, control based on artificial intelligence allows decentralized power units to collaborate in making the best decision to fulfill the administrator's needs, rather than relying on a primitive decentralization based only on the division of tasks. Among these smart approaches, reinforcement learning (RL) stands out as the most relevant and successful, particularly in power distribution management applications, because it does not require an accurate model of the environment to attain an optimized solution; it learns instead through interaction with that environment. Accordingly, there is an ongoing need for a clear, up-to-date view of the field's development, especially given the lack of recent comprehensive and detailed reviews of this vitally important research area. This paper fulfills that need by presenting a comprehensive review of state-of-the-art RL-based intelligent control strategies for optimizing the management of power flow and distribution. Particular attention is given to classifying the literature on emerging strategies, on multiagent RL proposals, and on multiagent primary-secondary control of power flow in micro- and smart grids, especially with respect to energy storage. In total, 126 of the most relevant, recent, and non-incremental publications have been reviewed and organized into relevant categories, and the salient features, major advantages, and drawbacks of each selection have been identified.
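The model-free property the abstract highlights can be illustrated with a generic example that is not taken from the paper: tabular Q-learning applied to a toy battery-dispatch problem. All names and numbers below are hypothetical; the point is only that the agent is never given the tariff or the battery dynamics as a model, and learns solely from sampled transitions.

```python
import random

# Hypothetical toy problem: a battery with 5 charge levels faces a cyclic
# tariff of two cheap hours followed by two expensive hours. The learner
# sees only (state, action, reward, next state) samples -- no model.

LEVELS = 5                      # battery state-of-charge levels 0..4
ACTIONS = (-1, 0, 1)            # discharge one unit, idle, charge one unit
PRICES = [1.0, 1.0, 5.0, 5.0]   # cyclic tariff (assumed for illustration)

def step(soc, hour, action):
    """Environment transition: clamp the charge level, pay/earn at the tariff."""
    new_soc = min(max(soc + action, 0), LEVELS - 1)
    delta = new_soc - soc                 # energy actually moved
    reward = -delta * PRICES[hour]        # pay to charge, earn to discharge
    return new_soc, reward

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table indexed by (state-of-charge, hour) -> one value per action
    q = {(s, h): [0.0, 0.0, 0.0] for s in range(LEVELS) for h in range(len(PRICES))}
    for _ in range(episodes):
        soc = rng.randrange(LEVELS)       # random initial charge level
        for hour in range(len(PRICES)):
            state = (soc, hour)
            if rng.random() < eps:        # epsilon-greedy exploration
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda i: q[state][i])
            soc, reward = step(soc, hour, ACTIONS[a])
            nxt = (soc, (hour + 1) % len(PRICES))
            # Model-free temporal-difference update: only the observed sample
            # is used; the price schedule itself is never consulted.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
    return q

q = train()

def best(soc, hour):
    """Greedy action from the learned Q-table."""
    return ACTIONS[max(range(3), key=lambda i: q[(soc, hour)][i])]
```

With enough episodes, the greedy policy charges an empty battery during cheap hours and discharges a charged one during expensive hours, even though the tariff was never exposed to the learner as a model — the property that makes RL attractive for power-distribution management.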

Suggested Citation

  • Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:4:p:1608-:d:1059125

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/4/1608/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/4/1608/
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2021. "Strategies for Controlling Microgrid Networks with Energy Storage Systems: A Review," Energies, MDPI, vol. 14(21), pages 1-45, November.
    2. Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
    3. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    4. Sun, Fangyuan & Kong, Xiangyu & Wu, Jianzhong & Gao, Bixuan & Chen, Ke & Lu, Ning, 2022. "DSM pricing method based on A3C and LSTM under cloud-edge environment," Applied Energy, Elsevier, vol. 315(C).
    5. Han, Kunlun & Yang, Kai & Yin, Linfei, 2022. "Lightweight actor-critic generative adversarial networks for real-time smart generation control of microgrids," Applied Energy, Elsevier, vol. 317(C).
    6. Bo, Lin & Han, Lijin & Xiang, Changle & Liu, Hui & Ma, Tian, 2022. "A Q-learning fuzzy inference system based online energy management strategy for off-road hybrid electric vehicles," Energy, Elsevier, vol. 252(C).
    7. Zhou, Jianhao & Xue, Yuan & Xu, Da & Li, Chaoxiong & Zhao, Wanzhong, 2022. "Self-learning energy management strategy for hybrid electric vehicle via curiosity-inspired asynchronous deep reinforcement learning," Energy, Elsevier, vol. 242(C).
    8. Zhang, Junjie & Jia, Rongwen & Yang, Hangjun & Dong, Kangyin, 2022. "Does electric vehicle promotion in the public sector contribute to urban transport carbon emissions reduction?," Transport Policy, Elsevier, vol. 125(C), pages 151-163.
    9. Tao Wu & Yanghong Xia & Liang Wang & Wei Wei, 2020. "Multiagent Based Distributed Control with Time-Oriented SoC Balancing Method for DC Microgrid," Energies, MDPI, vol. 13(11), pages 1-17, June.
    10. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    11. Pannee Suanpang & Pitchaya Jamjuntr & Kittisak Jermsittiparsert & Phuripoj Kaewyong, 2022. "Autonomous Energy Management by Applying Deep Q-Learning to Enhance Sustainability in Smart Tourism Cities," Energies, MDPI, vol. 15(5), pages 1-13, March.
    12. Hoda Sorouri & Arman Oshnoei & Mateja Novak & Frede Blaabjerg & Amjad Anvari-Moghaddam, 2022. "Learning-Based Model Predictive Control of DC-DC Buck Converters in DC Microgrids: A Multi-Agent Deep Reinforcement Learning Approach," Energies, MDPI, vol. 15(15), pages 1-21, July.
    13. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "An occupant-centric control framework for balancing comfort, energy use and hygiene in hot water systems: A model-free reinforcement learning approach," Applied Energy, Elsevier, vol. 312(C).
    14. Li, Chuang & Li, Guojie & Wang, Keyou & Han, Bei, 2022. "A multi-energy load forecasting method based on parallel architecture CNN-GRU and transfer learning for data deficient integrated energy systems," Energy, Elsevier, vol. 259(C).
    15. Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
    16. Chen, Zheng & Gu, Hongji & Shen, Shiquan & Shen, Jiangwei, 2022. "Energy management strategy for power-split plug-in hybrid electric vehicle based on MPC and double Q-learning," Energy, Elsevier, vol. 245(C).
    17. Wu, Jiahui & Wang, Jidong & Kong, Xiangyu, 2022. "Strategic bidding in a competitive electricity market: An intelligent method using Multi-Agent Transfer Learning based on reinforcement learning," Energy, Elsevier, vol. 256(C).
    18. Zhou, Quan & Li, Ji & Shuai, Bin & Williams, Huw & He, Yinglong & Li, Ziyang & Xu, Hongming & Yan, Fuwu, 2019. "Multi-step reinforcement learning for model-free predictive energy management of an electrified off-highway vehicle," Applied Energy, Elsevier, vol. 255(C).
    19. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    20. Kapil Deshpande & Philipp Möhl & Alexander Hämmerle & Georg Weichhart & Helmut Zörrer & Andreas Pichler, 2022. "Energy Management Simulation with Multi-Agent Reinforcement Learning: An Approach to Achieve Reliability and Resilience," Energies, MDPI, vol. 15(19), pages 1-35, October.
    21. Du, Guodong & Zou, Yuan & Zhang, Xudong & Guo, Lingxiong & Guo, Ningyuan, 2022. "Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework," Energy, Elsevier, vol. 241(C).
    22. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    23. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    24. Wu, Tao & Wang, Jianhui & Lu, Xiaonan & Du, Yuhua, 2022. "AC/DC hybrid distribution network reconfiguration with microgrid formation using multi-agent soft actor-critic," Applied Energy, Elsevier, vol. 307(C).
    25. Sun, Wenjing & Zou, Yuan & Zhang, Xudong & Guo, Ningyuan & Zhang, Bin & Du, Guodong, 2022. "High robustness energy management strategy of hybrid electric vehicle based on improved soft actor-critic deep reinforcement learning," Energy, Elsevier, vol. 258(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Andrzej Ożadowicz & Gabriela Walczyk, 2023. "Energy Performance and Control Strategy for Dynamic Façade with Perovskite PV Panels—Technical Analysis and Case Study," Energies, MDPI, vol. 16(9), pages 1-23, April.
    2. Marco Bindi & Maria Cristina Piccirilli & Antonio Luchetta & Francesco Grasso, 2023. "A Comprehensive Review of Fault Diagnosis and Prognosis Techniques in High Voltage and Medium Voltage Electrical Power Lines," Energies, MDPI, vol. 16(21), pages 1-37, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Miranda, Matheus H.R. & Silva, Fabrício L. & Lourenço, Maria A.M. & Eckert, Jony J. & Silva, Ludmila C.A., 2022. "Vehicle drivetrain and fuzzy controller optimization using a planar dynamics simulation based on a real-world driving cycle," Energy, Elsevier, vol. 257(C).
    5. Fuwu Yan & Jinhai Wang & Changqing Du & Min Hua, 2022. "Multi-Objective Energy Management Strategy for Hybrid Electric Vehicles Based on TD3 with Non-Parametric Reward Function," Energies, MDPI, vol. 16(1), pages 1-17, December.
    6. Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
    7. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    8. Penghui Qiang & Peng Wu & Tao Pan & Huaiquan Zang, 2021. "Real-Time Approximate Equivalent Consumption Minimization Strategy Based on the Single-Shaft Parallel Hybrid Powertrain," Energies, MDPI, vol. 14(23), pages 1-22, November.
    9. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Da wang, & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    10. Daeil Lee & Seoryong Koo & Inseok Jang & Jonghyun Kim, 2022. "Comparison of Deep Reinforcement Learning and PID Controllers for Automatic Cold Shutdown Operation," Energies, MDPI, vol. 15(8), pages 1-25, April.
    11. Robert Jane & Tae Young Kim & Samantha Rose & Emily Glass & Emilee Mossman & Corey James, 2022. "Developing AI/ML Based Predictive Capabilities for a Compression Ignition Engine Using Pseudo Dynamometer Data," Energies, MDPI, vol. 15(21), pages 1-49, October.
    12. Zhengyu Yao & Hwan-Sik Yoon & Yang-Ki Hong, 2023. "Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning," Energies, MDPI, vol. 16(2), pages 1-18, January.
    13. Tang, Wenbin & Wang, Yaqian & Jiao, Xiaohong & Ren, Lina, 2023. "Hierarchical energy management strategy based on adaptive dynamic programming for hybrid electric vehicles in car-following scenarios," Energy, Elsevier, vol. 265(C).
    14. Yao, Yongming & Wang, Jie & Zhou, Zhicong & Li, Hang & Liu, Huiying & Li, Tianyu, 2023. "Grey Markov prediction-based hierarchical model predictive control energy management for fuel cell/battery hybrid unmanned aerial vehicles," Energy, Elsevier, vol. 262(PA).
    15. Guo, Ningyuan & Zhang, Xudong & Zou, Yuan & Guo, Lingxiong & Du, Guodong, 2021. "Real-time predictive energy management of plug-in hybrid electric vehicles for coordination of fuel economy and battery degradation," Energy, Elsevier, vol. 214(C).
    16. Marouane Adnane & Ahmed Khoumsi & João Pedro F. Trovão, 2023. "Efficient Management of Energy Consumption of Electric Vehicles Using Machine Learning—A Systematic and Comprehensive Survey," Energies, MDPI, vol. 16(13), pages 1-39, June.
    17. Cui, Wei & Cui, Naxin & Li, Tao & Cui, Zhongrui & Du, Yi & Zhang, Chenghui, 2022. "An efficient multi-objective hierarchical energy management strategy for plug-in hybrid electric vehicle in connected scenario," Energy, Elsevier, vol. 257(C).
    18. Liu, Bo & Sun, Chao & Wang, Bo & Liang, Weiqiang & Ren, Qiang & Li, Junqiu & Sun, Fengchun, 2022. "Bi-level convex optimization of eco-driving for connected Fuel Cell Hybrid Electric Vehicles through signalized intersections," Energy, Elsevier, vol. 252(C).
    19. Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
    20. Connor Scott & Mominul Ahsan & Alhussein Albarbar, 2021. "Machine Learning Based Vehicle to Grid Strategy for Improving the Energy Performance of Public Buildings," Sustainability, MDPI, vol. 13(7), pages 1-22, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:16:y:2023:i:4:p:1608-:d:1059125. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.