Printed from https://ideas.repec.org/a/gam/jeners/v16y2023i4p1608-d1059125.html

Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey

Author

Listed:
  • Mudhafar Al-Saadi

    (School of Computing, Engineering, and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK)

  • Maher Al-Greer

    (School of Computing, Engineering, and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK)

  • Michael Short

    (School of Computing, Engineering, and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK)

Abstract

Intelligent energy management in renewable-based power distribution applications, such as microgrids, smart grids, smart buildings, and EV systems, is becoming increasingly important in the context of the transition toward the decentralization, digitalization, and decarbonization of energy networks. Arguably, many challenges in this transition can be overcome, and benefits leveraged, by adopting intelligent, autonomous, computer-based decision-making through the introduction of smart technologies, specifically artificial intelligence. Unlike other numerical or soft-computing optimization methods, control based on artificial intelligence allows decentralized power units to collaborate in making the best decision to fulfill the administrator's needs, rather than relying on a primitive decentralization based only on the division of tasks. Among these smart approaches, reinforcement learning (RL) stands out as the most relevant and successful, particularly in power distribution management applications, because it does not require an accurate model of the environment to attain an optimized solution through interaction with it. Accordingly, there is an ongoing need for a clear, up-to-date vision of the development level of this field, especially given the lack of recent, comprehensive, detailed reviews. This paper fulfills that need by presenting a comprehensive review of the state-of-the-art successful and distinguished RL-based intelligent control strategies for optimizing the management of power flow and distribution. Extensive attention is given to classifying the literature on emerging strategies, on proposals based on multiagent RL, and on multiagent primary-secondary control for managing power flow in micro- and smart grids, particularly energy storage. In total, 126 of the most relevant, recent, and non-incremental works have been reviewed and grouped into relevant categories. Furthermore, the salient features of each selection, including its major advantages and drawbacks, have been identified.
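To illustrate the model-free property the abstract highlights — that RL attains an optimized policy purely by interacting with the environment, without an accurate system model — here is a minimal tabular Q-learning sketch for a toy storage-dispatch task. All names, the state space, and the reward shape are hypothetical and not drawn from the surveyed works.

```python
import random

# Illustrative sketch only: tabular Q-learning for a hypothetical battery
# dispatch task. State of charge is an integer 0..4; the agent discharges
# (-1), idles (0), or charges (+1), and the reward favours staying near the
# middle of the range. The agent never sees the transition model; it learns
# only from observed (state, action, reward, next state) samples.

STATES = list(range(5))    # state-of-charge levels 0..4
ACTIONS = (-1, 0, 1)       # discharge, idle, charge
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(s, a):
    """Hypothetical environment: clip state of charge, reward mid-range."""
    s_next = max(0, min(4, s + a))
    return s_next, -abs(s_next - 2)

random.seed(0)
for _ in range(400):                       # episodes with exploring starts
    s = random.choice(STATES)
    for _ in range(20):
        if random.random() < EPS:          # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s_next, r = step(s, a)
        # Q-learning update uses only the observed transition, no model
        best_next = max(Q[(s_next, a_)] for a_ in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# Greedy policy: charge when below mid-range, discharge when above
policy = {s: max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in STATES}
print(policy)
```

After training, the greedy policy charges at low states of charge, idles at the midpoint, and discharges at high states of charge — learned without any explicit model of the battery dynamics, which is the property that makes RL attractive for the power distribution problems this survey classifies.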

Suggested Citation

  • Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:4:p:1608-:d:1059125

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/4/1608/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/4/1608/
    Download Restriction: no

    References listed on IDEAS

    1. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    2. Han, Kunlun & Yang, Kai & Yin, Linfei, 2022. "Lightweight actor-critic generative adversarial networks for real-time smart generation control of microgrids," Applied Energy, Elsevier, vol. 317(C).
    3. Bo, Lin & Han, Lijin & Xiang, Changle & Liu, Hui & Ma, Tian, 2022. "A Q-learning fuzzy inference system based online energy management strategy for off-road hybrid electric vehicles," Energy, Elsevier, vol. 252(C).
    4. Zhang, Junjie & Jia, Rongwen & Yang, Hangjun & Dong, Kangyin, 2022. "Does electric vehicle promotion in the public sector contribute to urban transport carbon emissions reduction?," Transport Policy, Elsevier, vol. 125(C), pages 151-163.
    5. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    6. Hoda Sorouri & Arman Oshnoei & Mateja Novak & Frede Blaabjerg & Amjad Anvari-Moghaddam, 2022. "Learning-Based Model Predictive Control of DC-DC Buck Converters in DC Microgrids: A Multi-Agent Deep Reinforcement Learning Approach," Energies, MDPI, vol. 15(15), pages 1-21, July.
    7. Li, Chuang & Li, Guojie & Wang, Keyou & Han, Bei, 2022. "A multi-energy load forecasting method based on parallel architecture CNN-GRU and transfer learning for data deficient integrated energy systems," Energy, Elsevier, vol. 259(C).
    8. Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
    9. Wu, Jiahui & Wang, Jidong & Kong, Xiangyu, 2022. "Strategic bidding in a competitive electricity market: An intelligent method using Multi-Agent Transfer Learning based on reinforcement learning," Energy, Elsevier, vol. 256(C).
    10. Zhou, Quan & Li, Ji & Shuai, Bin & Williams, Huw & He, Yinglong & Li, Ziyang & Xu, Hongming & Yan, Fuwu, 2019. "Multi-step reinforcement learning for model-free predictive energy management of an electrified off-highway vehicle," Applied Energy, Elsevier, vol. 255(C).
    11. Kapil Deshpande & Philipp Möhl & Alexander Hämmerle & Georg Weichhart & Helmut Zörrer & Andreas Pichler, 2022. "Energy Management Simulation with Multi-Agent Reinforcement Learning: An Approach to Achieve Reliability and Resilience," Energies, MDPI, vol. 15(19), pages 1-35, October.
    12. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    13. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    14. Wu, Tao & Wang, Jianhui & Lu, Xiaonan & Du, Yuhua, 2022. "AC/DC hybrid distribution network reconfiguration with microgrid formation using multi-agent soft actor-critic," Applied Energy, Elsevier, vol. 307(C).
    15. Sun, Wenjing & Zou, Yuan & Zhang, Xudong & Guo, Ningyuan & Zhang, Bin & Du, Guodong, 2022. "High robustness energy management strategy of hybrid electric vehicle based on improved soft actor-critic deep reinforcement learning," Energy, Elsevier, vol. 258(C).
    16. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2021. "Strategies for Controlling Microgrid Networks with Energy Storage Systems: A Review," Energies, MDPI, vol. 14(21), pages 1-45, November.
    17. Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
    18. Sun, Fangyuan & Kong, Xiangyu & Wu, Jianzhong & Gao, Bixuan & Chen, Ke & Lu, Ning, 2022. "DSM pricing method based on A3C and LSTM under cloud-edge environment," Applied Energy, Elsevier, vol. 315(C).
    19. Zhou, Jianhao & Xue, Yuan & Xu, Da & Li, Chaoxiong & Zhao, Wanzhong, 2022. "Self-learning energy management strategy for hybrid electric vehicle via curiosity-inspired asynchronous deep reinforcement learning," Energy, Elsevier, vol. 242(C).
    20. Tao Wu & Yanghong Xia & Liang Wang & Wei Wei, 2020. "Multiagent Based Distributed Control with Time-Oriented SoC Balancing Method for DC Microgrid," Energies, MDPI, vol. 13(11), pages 1-17, June.
    21. Pannee Suanpang & Pitchaya Jamjuntr & Kittisak Jermsittiparsert & Phuripoj Kaewyong, 2022. "Autonomous Energy Management by Applying Deep Q-Learning to Enhance Sustainability in Smart Tourism Cities," Energies, MDPI, vol. 15(5), pages 1-13, March.
    22. Heidari, Amirreza & Maréchal, François & Khovalyg, Dolaana, 2022. "An occupant-centric control framework for balancing comfort, energy use and hygiene in hot water systems: A model-free reinforcement learning approach," Applied Energy, Elsevier, vol. 312(C).
    23. Chen, Zheng & Gu, Hongji & Shen, Shiquan & Shen, Jiangwei, 2022. "Energy management strategy for power-split plug-in hybrid electric vehicle based on MPC and double Q-learning," Energy, Elsevier, vol. 245(C).
    24. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    25. Du, Guodong & Zou, Yuan & Zhang, Xudong & Guo, Lingxiong & Guo, Ningyuan, 2022. "Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework," Energy, Elsevier, vol. 241(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Alam, Md Morshed & Hossain, M.J. & Habib, Md Ahasan & Arafat, M.Y. & Hannan, M.A., 2025. "Artificial intelligence integrated grid systems: Technologies, potential frameworks, challenges, and research directions," Renewable and Sustainable Energy Reviews, Elsevier, vol. 211(C).
    2. Elinor Ginzburg-Ganz & Itay Segev & Alexander Balabanov & Elior Segev & Sivan Kaully Naveh & Ram Machlev & Juri Belikov & Liran Katzir & Sarah Keren & Yoash Levron, 2024. "Reinforcement Learning Model-Based and Model-Free Paradigms for Optimal Control Problems in Power Systems: Comprehensive Review and Future Directions," Energies, MDPI, vol. 17(21), pages 1-54, October.
    3. Amoh Mensah Akwasi & Haoyong Chen & Junfeng Liu & Otuo-Acheampong Duku, 2025. "Hybrid Adaptive Learning-Based Control for Grid-Forming Inverters: Real-Time Adaptive Voltage Regulation, Multi-Level Disturbance Rejection, and Lyapunov-Based Stability," Energies, MDPI, vol. 18(16), pages 1-29, August.
    4. Muhammad Ehtsham & Giuliana Parisi & Flavia Pedone & Federico Rossi & Marta Zincani & Eleonora Congiu & Chiara Marchionni, 2025. "AI-Powered Advanced Technologies for a Sustainable Built Environment: A Systematic Review on Emerging Challenges," Sustainability, MDPI, vol. 17(17), pages 1-45, September.
    5. Andrzej Ożadowicz & Gabriela Walczyk, 2023. "Energy Performance and Control Strategy for Dynamic Façade with Perovskite PV Panels—Technical Analysis and Case Study," Energies, MDPI, vol. 16(9), pages 1-23, April.
    6. Alejandra Tabares & Pablo Cortés, 2024. "Using Stochastic Dual Dynamic Programming to Solve the Multi-Stage Energy Management Problem in Microgrids," Energies, MDPI, vol. 17(11), pages 1-24, May.
    7. Marco Bindi & Maria Cristina Piccirilli & Antonio Luchetta & Francesco Grasso, 2023. "A Comprehensive Review of Fault Diagnosis and Prognosis Techniques in High Voltage and Medium Voltage Electrical Power Lines," Energies, MDPI, vol. 16(21), pages 1-37, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Dagang Lu & Yu Chen & Yan Sun & Wenxuan Wei & Shilin Ji & Hongshuo Ruan & Fengyan Yi & Chunchun Jia & Donghai Hu & Kunpeng Tang & Song Huang & Jing Wang, 2025. "Research Progress in Multi-Domain and Cross-Domain AI Management and Control for Intelligent Electric Vehicles," Energies, MDPI, vol. 18(17), pages 1-52, August.
    3. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    4. Gao, Qinxiang & Lei, Tao & Yao, Wenli & Zhang, Xingyu & Zhang, Xiaobin, 2023. "A health-aware energy management strategy for fuel cell hybrid electric UAVs based on safe reinforcement learning," Energy, Elsevier, vol. 283(C).
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Liu, Zemin Eitan & Li, Yong & Zhou, Quan & Shuai, Bin & Hua, Min & Xu, Hongming & Xu, Lubing & Tan, Guikun & Li, Yanfei, 2025. "Real-time energy management for HEV combining naturalistic driving data and deep reinforcement learning with high generalization," Applied Energy, Elsevier, vol. 377(PA).
    7. Dawei Zhong & Bolan Liu & Liang Liu & Wenhao Fan & Jingxian Tang, 2025. "Artificial Intelligence Algorithms for Hybrid Electric Powertrain System Control: A Review," Energies, MDPI, vol. 18(8), pages 1-30, April.
    8. Miranda, Matheus H.R. & Silva, Fabrício L. & Lourenço, Maria A.M. & Eckert, Jony J. & Silva, Ludmila C.A., 2022. "Vehicle drivetrain and fuzzy controller optimization using a planar dynamics simulation based on a real-world driving cycle," Energy, Elsevier, vol. 257(C).
    9. Fan Wang & Yina Hong & Xiaohuan Zhao, 2025. "Research and Comparative Analysis of Energy Management Strategies for Hybrid Electric Vehicles: A Review," Energies, MDPI, vol. 18(11), pages 1-28, May.
    10. Fuwu Yan & Jinhai Wang & Changqing Du & Min Hua, 2022. "Multi-Objective Energy Management Strategy for Hybrid Electric Vehicles Based on TD3 with Non-Parametric Reward Function," Energies, MDPI, vol. 16(1), pages 1-17, December.
    11. Hu, Rong & Zhou, Kaile & Yin, Hui, 2024. "Reinforcement learning model for incentive-based integrated demand response considering demand-side coupling," Energy, Elsevier, vol. 308(C).
    12. Iqbal, Najam & Wang, Hu & Zheng, Zunqing & Yao, Mingfa, 2024. "Reinforcement learning-based heuristic planning for optimized energy management in power-split hybrid electric heavy duty vehicles," Energy, Elsevier, vol. 302(C).
    13. Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
    14. Tang, Tianfeng & Peng, Qianlong & Shi, Qing & Peng, Qingguo & Zhao, Jin & Chen, Chaoyi & Wang, Guangwei, 2024. "Energy management of fuel cell hybrid electric bus in mountainous regions: A deep reinforcement learning approach considering terrain characteristics," Energy, Elsevier, vol. 311(C).
    15. Han, Lijin & You, Congwen & Yang, Ningkang & Liu, Hui & Chen, Ke & Xiang, Changle, 2024. "Adaptive real-time energy management strategy using heuristic search for off-road hybrid electric vehicles," Energy, Elsevier, vol. 304(C).
    16. Zhang, Bin & Hu, Weihao & Ghias, Amer M.Y.M. & Xu, Xiao & Chen, Zhe, 2022. "Multi-agent deep reinforcement learning-based coordination control for grid-aware multi-buildings," Applied Energy, Elsevier, vol. 328(C).
    17. Penghui Qiang & Peng Wu & Tao Pan & Huaiquan Zang, 2021. "Real-Time Approximate Equivalent Consumption Minimization Strategy Based on the Single-Shaft Parallel Hybrid Powertrain," Energies, MDPI, vol. 14(23), pages 1-22, November.
    18. Zhang, Dongfang & Sun, Wei & Zou, Yuan & Zhang, Xudong, 2025. "Energy management in HDHEV with dual APUs: Enhancing soft actor-critic using clustered experience replay and multi-dimensional priority sampling," Energy, Elsevier, vol. 319(C).
    19. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    20. Zhang, Hao & Lei, Nuo & Chen, Boli & Li, Bingbing & Li, Rulong & Wang, Zhi, 2024. "Modeling and control system optimization for electrified vehicles: A data-driven approach," Energy, Elsevier, vol. 310(C).

