Printed from https://ideas.repec.org/a/eee/appene/v329y2023ics0306261922014696.html

Reinforcement learning in deregulated energy market: A comprehensive review

Author

Listed:
  • Zhu, Ziqing
  • Hu, Ze
  • Chan, Ka Wing
  • Bu, Siqi
  • Zhou, Bin
  • Xia, Shiwei

Abstract

The increasing penetration of renewable generation, along with the deregulation and marketization of the power industry, is transforming energy market operation paradigms. Optimal bidding strategies and dispatching methodologies under these new paradigms are priority concerns for both market participants and power system operators. In contrast with conventional solution methodologies, Reinforcement Learning (RL), an emerging machine learning technique with favorable computational performance, is playing an increasingly significant role in both academia and industry. This paper presents a comprehensive review of RL applications in deregulated energy market operation, including bidding and dispatching strategy optimization, based on more than 150 carefully selected papers. For each application, beyond a paradigmatic summary of the generalized methodology, in-depth discussions of the applicability of RL techniques and the obstacles to deploying them are provided. Finally, several RL techniques with great potential for bidding and dispatching problems are recommended and discussed.
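To make the bidding application concrete for readers unfamiliar with RL: a minimal, purely illustrative sketch of tabular Q-learning for a toy single-agent bidding problem is shown below. The market model, cost, action set, and stateless simplification are assumptions for illustration only and are not taken from the reviewed article; real market-bidding formulations use richer state spaces and the deep RL variants the review surveys.

```python
import random

random.seed(0)

# Toy market: a generator with marginal cost COST picks a bid price each
# round; the clearing price is drawn uniformly at random. If the bid is at
# or below the clearing price, the generator is dispatched and earns
# (clearing_price - COST); otherwise it earns nothing.
COST = 20.0
ACTIONS = [20, 25, 30, 35, 40]          # candidate bid prices ($/MWh)
CLEARING_PRICES = [25, 30, 35, 40, 45]  # equally likely clearing outcomes

def sample_profit(bid):
    price = random.choice(CLEARING_PRICES)
    return (price - COST) if bid <= price else 0.0

# Tabular Q-learning collapsed to a single (stateless) decision:
# Q[a] estimates the expected profit of each bid level.
Q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)           # explore
    else:
        a = max(Q, key=Q.get)                # exploit current estimate
    Q[a] += alpha * (sample_profit(a) - Q[a])  # one-step update (no next state)

best_bid = max(Q, key=Q.get)
print(best_bid, {a: round(v, 1) for a, v in Q.items()})
```

In this toy setting the learned policy converges toward low bids (here, bidding near marginal cost maximizes expected profit because dispatch probability dominates); the multi-agent, partially observed versions of this problem are what motivate the deep RL methods surveyed in the review.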

Suggested Citation

  • Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
  • Handle: RePEc:eee:appene:v:329:y:2023:i:c:s0306261922014696
    DOI: 10.1016/j.apenergy.2022.120212

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922014696
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.120212?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wang, Xiaodi & Liu, Youbo & Zhao, Junbo & Liu, Chang & Liu, Junyong & Yan, Jinyue, 2021. "Surrogate model enabled deep reinforcement learning for hybrid energy community operation," Applied Energy, Elsevier, vol. 289(C).
    2. Nadeem Javaid & Sakeena Javaid & Wadood Abdul & Imran Ahmed & Ahmad Almogren & Atif Alamri & Iftikhar Azim Niaz, 2017. "A Hybrid Genetic Wind Driven Heuristic Optimization Algorithm for Demand Side Management in Smart Grid," Energies, MDPI, vol. 10(3), pages 1-27, March.
    3. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    4. Li, Jiawen & Yu, Tao & Zhang, Xiaoshun & Li, Fusheng & Lin, Dan & Zhu, Hanxin, 2021. "Efficient experience replay based deep deterministic policy gradient for AGC dispatch in integrated energy system," Applied Energy, Elsevier, vol. 285(C).
    5. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    6. Shang, Yuwei & Wu, Wenchuan & Guo, Jianbo & Ma, Zhao & Sheng, Wanxing & Lv, Zhe & Fu, Chenran, 2020. "Stochastic dispatch of energy storage in microgrids: An augmented reinforcement learning approach," Applied Energy, Elsevier, vol. 261(C).
    7. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    8. Zhu, Ziqing & Wing Chan, Ka & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2021. "Real-Time interaction of active distribution network and virtual microgrids: Market paradigm and data-driven stakeholder behavior analysis," Applied Energy, Elsevier, vol. 297(C).
    9. Chen, Yue & Wei, Wei & Liu, Feng & Mei, Shengwei, 2016. "Distributionally robust hydro-thermal-wind economic dispatch," Applied Energy, Elsevier, vol. 173(C), pages 511-519.
    10. Ross, Martin T., 2018. "The future of the electricity industry: Implications of trends and taxes," Energy Economics, Elsevier, vol. 73(C), pages 393-409.
    11. Zhang, Chenghua & Wu, Jianzhong & Zhou, Yue & Cheng, Meng & Long, Chao, 2018. "Peer-to-Peer energy trading in a Microgrid," Applied Energy, Elsevier, vol. 220(C), pages 1-12.
    12. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    13. Lu, Renzhi & Hong, Seung Ho & Zhang, Xiongfeng, 2018. "A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach," Applied Energy, Elsevier, vol. 220(C), pages 220-230.
    14. Hua, Haochen & Qin, Yuchao & Hao, Chuantong & Cao, Junwei, 2019. "Optimal energy management strategies for energy Internet via deep reinforcement learning approach," Applied Energy, Elsevier, vol. 239(C), pages 598-609.
    15. Jin-Gyeom Kim & Bowon Lee, 2020. "Automatic P2P Energy Trading Model Based on Reinforcement Learning Using Long Short-Term Delayed Reward," Energies, MDPI, vol. 13(20), pages 1-27, October.
    16. Zhang, Xiongfeng & Lu, Renzhi & Jiang, Junhui & Hong, Seung Ho & Song, Won Seok, 2021. "Testbed implementation of reinforcement learning-based demand response energy management system," Applied Energy, Elsevier, vol. 297(C).
    17. Hu, Qian & Zhu, Ziqing & Bu, Siqi & Wing Chan, Ka & Li, Fangxing, 2021. "A multi-market nanogrid P2P energy and ancillary service trading paradigm: Mechanisms and implementations," Applied Energy, Elsevier, vol. 293(C).
    18. Fiuza de Bragança, Gabriel Godofredo & Daglish, Toby, 2016. "Can market power in the electricity spot market translate into market power in the hedge market?," Energy Economics, Elsevier, vol. 58(C), pages 11-26.
    19. Meng, Fanyi & Bai, Yang & Jin, Jingliang, 2021. "An advanced real-time dispatching strategy for a distributed energy system based on the reinforcement learning algorithm," Renewable Energy, Elsevier, vol. 178(C), pages 13-24.
    20. Kong, Xiangyu & Liu, Dehong & Xiao, Jie & Wang, Chengshan, 2019. "A multi-agent optimal bidding strategy in microgrids based on artificial immune system," Energy, Elsevier, vol. 189(C).
    21. Oh, Seok Hwa & Yoon, Yong Tae & Kim, Seung Wan, 2020. "Online reconfiguration scheme of self-sufficient distribution network based on a reinforcement learning approach," Applied Energy, Elsevier, vol. 280(C).
    22. Panos, Evangelos & Densing, Martin, 2019. "The future developments of the electricity prices in view of the implementation of the Paris Agreements: Will the current trends prevail, or a reversal is ahead?," Energy Economics, Elsevier, vol. 84(C).
    23. Seongwoo Lee & Joonho Seon & Chanuk Kyeong & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2021. "Novel Energy Trading System Based on Deep-Reinforcement Learning in Microgrids," Energies, MDPI, vol. 14(17), pages 1-14, September.
    24. Brida V. Mbuwir & Frederik Ruelens & Fred Spiessens & Geert Deconinck, 2017. "Battery Energy Management in a Microgrid Using Batch Reinforcement Learning," Energies, MDPI, vol. 10(11), pages 1-19, November.
    25. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    26. Kuznetsova, Elizaveta & Li, Yan-Fu & Ruiz, Carlos & Zio, Enrico & Ault, Graham & Bell, Keith, 2013. "Reinforcement learning for microgrid energy management," Energy, Elsevier, vol. 59(C), pages 133-146.
    27. Hosseini, Seyyed Ahmad & Toubeau, Jean-François & De Grève, Zacharie & Vallée, François, 2020. "An advanced day-ahead bidding strategy for wind power producers considering confidence level on the real-time reserve provision," Applied Energy, Elsevier, vol. 280(C).
    28. Du, Guodong & Zou, Yuan & Zhang, Xudong & Guo, Lingxiong & Guo, Ningyuan, 2022. "Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework," Energy, Elsevier, vol. 241(C).
    29. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    30. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    31. Chuanjia Han & Bo Yang & Tao Bao & Tao Yu & Xiaoshun Zhang, 2017. "Bacteria Foraging Reinforcement Learning for Risk-Based Economic Dispatch via Knowledge Transfer," Energies, MDPI, vol. 10(5), pages 1-24, May.
    32. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Zhao, Yincheng & Zhang, Guozhou & Hu, Weihao & Huang, Qi & Chen, Zhe & Blaabjerg, Frede, 2023. "Meta-learning based voltage control strategy for emergency faults of active distribution networks," Applied Energy, Elsevier, vol. 349(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Bio Gassi, Karim & Baysal, Mustafa, 2023. "Improving real-time energy decision-making model with an actor-critic agent in modern microgrids with energy storage devices," Energy, Elsevier, vol. 263(PE).
    3. Zhou, Yanting & Ma, Zhongjing & Zhang, Jinhui & Zou, Suli, 2022. "Data-driven stochastic energy management of multi energy system using deep reinforcement learning," Energy, Elsevier, vol. 261(PA).
    4. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    5. Grace Muriithi & Sunetra Chowdhury, 2021. "Optimal Energy Management of a Grid-Tied Solar PV-Battery Microgrid: A Reinforcement Learning Approach," Energies, MDPI, vol. 14(9), pages 1-24, May.
    6. Soleimanzade, Mohammad Amin & Kumar, Amit & Sadrzadeh, Mohtada, 2022. "Novel data-driven energy management of a hybrid photovoltaic-reverse osmosis desalination system using deep reinforcement learning," Applied Energy, Elsevier, vol. 317(C).
    7. Sabarathinam Srinivasan & Suresh Kumarasamy & Zacharias E. Andreadakis & Pedro G. Lind, 2023. "Artificial Intelligence and Mathematical Models of Power Grids Driven by Renewable Energy Sources: A Survey," Energies, MDPI, vol. 16(14), pages 1-56, July.
    8. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    9. Zheng, Lingwei & Wu, Hao & Guo, Siqi & Sun, Xinyu, 2023. "Real-time dispatch of an integrated energy system based on multi-stage reinforcement learning with an improved action-choosing strategy," Energy, Elsevier, vol. 277(C).
    10. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    11. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    12. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    13. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    14. Zhang, Li & Gao, Yan & Zhu, Hongbo & Tao, Li, 2022. "Bi-level stochastic real-time pricing model in multi-energy generation system: A reinforcement learning approach," Energy, Elsevier, vol. 239(PA).
    15. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    16. Lu, Renzhi & Bai, Ruichang & Ding, Yuemin & Wei, Min & Jiang, Junhui & Sun, Mingyang & Xiao, Feng & Zhang, Hai-Tao, 2021. "A hybrid deep learning-based online energy management scheme for industrial microgrid," Applied Energy, Elsevier, vol. 304(C).
    17. Zhao, Liyuan & Yang, Ting & Li, Wei & Zomaya, Albert Y., 2022. "Deep reinforcement learning-based joint load scheduling for household multi-energy system," Applied Energy, Elsevier, vol. 324(C).
    18. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    19. Khawaja Haider Ali & Mohammad Abusara & Asif Ali Tahir & Saptarshi Das, 2023. "Dual-Layer Q-Learning Strategy for Energy Management of Battery Storage in Grid-Connected Microgrids," Energies, MDPI, vol. 16(3), pages 1-17, January.
    20. Zhang, Bin & Wu, Xuewei & Ghias, Amer M.Y.M. & Chen, Zhe, 2023. "Coordinated carbon capture systems and power-to-gas dynamic economic energy dispatch strategy for electricity–gas coupled systems considering system uncertainty: An improved soft actor–critic approach," Energy, Elsevier, vol. 271(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:329:y:2023:i:c:s0306261922014696. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.