
Reinforcement learning for electric vehicle applications in power systems: A critical review

Author

Listed:
  • Qiu, Dawei
  • Wang, Yi
  • Hua, Weiqi
  • Strbac, Goran

Abstract

Electric vehicles (EVs) are playing an important role in power systems owing to their mobility and flexibility. Modern power systems are seeing an increasing penetration of renewable energy resources, which brings many benefits for mitigating climate change and accelerating the low-carbon transition. However, the intermittent and unstable nature of renewable energy sources introduces new challenges to both the planning and operation of power systems. To address these issues, vehicle-to-grid (V2G) technology has gradually been recognized as a viable way to provide various ancillary services to power systems. Many studies have developed model-based optimization methods for EV dispatch problems. Nevertheless, such methods cannot effectively handle the highly dynamic and stochastic environment that results from the complexity of power systems. Reinforcement learning (RL), a model-free and online learning method, can capture various uncertainties through repeated interactions with the environment and adapt to changing state conditions in real time. As a result, applying advanced RL algorithms to EV dispatch problems has attracted a surge of attention in recent years, leading to many notable research papers and important findings. This paper provides a comprehensive review of popular RL algorithms, categorized into single-agent and multi-agent RL, and summarizes how these algorithms can be applied to various EV dispatch problems, including grid-to-vehicle (G2V), vehicle-to-home (V2H), and V2G. Finally, key challenges and important future research directions are discussed across five aspects: (a) data quality and availability; (b) environment setup; (c) safety and robustness; (d) training performance; and (e) real-world deployment.
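To make the RL formulation concrete, the sketch below shows a minimal tabular Q-learning agent for a single-EV charging/discharging (G2V/V2G) dispatch problem. It illustrates the general technique the review surveys, not code from the paper; the price profile, battery discretization, horizon, and penalty terms are all hypothetical assumptions.

    # Illustrative sketch only (not from the reviewed paper): a minimal tabular
    # Q-learning agent for a single-EV charging/discharging (G2V/V2G) problem.
    # All parameters (price profile, battery size, horizon) are hypothetical.
    import random

    HOURS = 24                       # daily dispatch horizon
    SOC_LEVELS = 11                  # discretized state of charge: 0.0, 0.1, ..., 1.0
    ACTIONS = [-1, 0, 1]             # discharge (V2G), idle, charge (G2V), in 0.1-SoC steps
    PRICE = [0.10]*7 + [0.30]*12 + [0.10]*5   # assumed hourly price profile
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1        # learning rate, discount, exploration

    # Q-table indexed by (hour, SoC level, action index)
    Q = [[[0.0]*len(ACTIONS) for _ in range(SOC_LEVELS)] for _ in range(HOURS)]

    def step(hour, soc, action):
        """Environment model: apply action, return (next_soc, reward)."""
        nxt = min(max(soc + action, 0), SOC_LEVELS - 1)
        moved = nxt - soc                         # energy actually moved (in SoC steps)
        reward = -moved * PRICE[hour]             # pay to charge, earn by discharging
        if hour == HOURS - 1 and nxt < SOC_LEVELS - 1:
            reward -= 1.0 * (SOC_LEVELS - 1 - nxt)   # penalty if not full for the next trip
        return nxt, reward

    for episode in range(5000):
        soc = 2                                   # start each day at 20% SoC
        for h in range(HOURS):
            a = (random.randrange(len(ACTIONS)) if random.random() < EPS
                 else max(range(len(ACTIONS)), key=lambda i: Q[h][soc][i]))
            nxt, r = step(h, soc, ACTIONS[a])
            target = r if h == HOURS - 1 else r + GAMMA * max(Q[h+1][nxt])
            Q[h][soc][a] += ALPHA * (target - Q[h][soc][a])   # temporal-difference update
            soc = nxt

    # The greedy policy learned here charges in cheap hours and discharges at the peak.

In the deep RL work the review covers, the Q-table would typically be replaced by a neural network (e.g., DQN, as in reference 17 below) to handle continuous states, and by multi-agent variants when coordinating EV fleets.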

Suggested Citation

  • Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
  • Handle: RePEc:eee:rensus:v:173:y:2023:i:c:s1364032122009339
    DOI: 10.1016/j.rser.2022.113052

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1364032122009339
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.rser.2022.113052?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    2. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    3. Wang, Zhe & Hong, Tianzhen, 2020. "Reinforcement learning for building controls: The opportunities and challenges," Applied Energy, Elsevier, vol. 269(C).
    4. DeForest, Nicholas & MacDonald, Jason S. & Black, Douglas R., 2018. "Day ahead optimization of an electric vehicle fleet providing ancillary services in the Los Angeles Air Force Base vehicle-to-grid demonstration," Applied Energy, Elsevier, vol. 210(C), pages 987-1001.
    5. Zhou, Yue & Wu, Jianzhong & Song, Guanyu & Long, Chao, 2020. "Framework design and optimal bidding strategy for ancillary service provision from a peer-to-peer energy trading community," Applied Energy, Elsevier, vol. 278(C).
    6. Gonzalez Venegas, Felipe & Petit, Marc & Perez, Yannick, 2021. "Active integration of electric vehicles into distribution grids: Barriers and frameworks for flexibility services," Renewable and Sustainable Energy Reviews, Elsevier, vol. 145(C).
    7. Dowling, Paul, 2013. "The impact of climate change on the European energy system," Energy Policy, Elsevier, vol. 60(C), pages 406-417.
    8. Bellocchi, Sara & Klöckner, Kai & Manno, Michele & Noussan, Michel & Vellini, Michela, 2019. "On the role of electric vehicles towards low-carbon energy systems: Italy and Germany in comparison," Applied Energy, Elsevier, vol. 255(C).
    9. Dorokhova, Marina & Martinson, Yann & Ballif, Christophe & Wyrsch, Nicolas, 2021. "Deep reinforcement learning control of electric vehicle charging in the presence of photovoltaic generation," Applied Energy, Elsevier, vol. 301(C).
    10. Shaukat, N. & Khan, B. & Ali, S.M. & Mehmood, C.A. & Khan, J. & Farid, U. & Majid, M. & Anwar, S.M. & Jawad, M. & Ullah, Z., 2018. "A survey on electric vehicle transportation within smart grid system," Renewable and Sustainable Energy Reviews, Elsevier, vol. 81(P1), pages 1329-1349.
    11. Ruisheng Wang & Zhong Chen & Qiang Xing & Ziqi Zhang & Tian Zhang, 2022. "A Modified Rainbow-Based Deep Reinforcement Learning Method for Optimal Scheduling of Charging Station," Sustainability, MDPI, vol. 14(3), pages 1-14, February.
    12. Jiang, C.X. & Jing, Z.X. & Cui, X.R. & Ji, T.Y. & Wu, Q.H., 2018. "Multiple agents and reinforcement learning for modelling charging loads of electric taxis," Applied Energy, Elsevier, vol. 222(C), pages 158-168.
    13. Peng, Minghong & Liu, Lian & Jiang, Chuanwen, 2012. "A review on the economic dispatch and risk management of the large-scale plug-in electric vehicles (PHEVs)-penetrated power systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 16(3), pages 1508-1515.
    14. Hussain, Akhtar & Bui, Van-Hai & Kim, Hak-Man, 2019. "Microgrids as a resilience resource and strategies used by microgrids for enhancing resilience," Applied Energy, Elsevier, vol. 240(C), pages 56-72.
    15. Wang, Xue-Chao & Klemeš, Jiří Jaromír & Dong, Xiaobin & Fan, Weiguo & Xu, Zihan & Wang, Yutao & Varbanov, Petar Sabev, 2019. "Air pollution terrain nexus: A review considering energy generation and consumption," Renewable and Sustainable Energy Reviews, Elsevier, vol. 105(C), pages 71-85.
    16. Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
    17. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & Amir Sadik & Ioannis Antonoglou & Helen King & Dharshan Kumaran & Daan Wierstra & Shane Legg & Demis Hassabis, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    18. Ruan, Guangchun & Wu, Jiahan & Zhong, Haiwang & Xia, Qing & Xie, Le, 2021. "Quantitative assessment of U.S. bulk power systems and market operations during the COVID-19 pandemic," Applied Energy, Elsevier, vol. 286(C).
    19. Wang, Y. & Rousis, A. Oulis & Strbac, G., 2022. "Resilience-driven optimal sizing and pre-positioning of mobile energy storage systems in decentralized networked microgrids," Applied Energy, Elsevier, vol. 305(C).
    20. Shang, Wen-Long & Chen, Jinyu & Bi, Huibo & Sui, Yi & Chen, Yanyan & Yu, Haitao, 2021. "Impacts of COVID-19 pandemic on user behaviors and environmental benefits of bike sharing: A big-data analysis," Applied Energy, Elsevier, vol. 285(C).
    21. Lopion, Peter & Markewitz, Peter & Robinius, Martin & Stolten, Detlef, 2018. "A review of current challenges and trends in energy systems modeling," Renewable and Sustainable Energy Reviews, Elsevier, vol. 96(C), pages 156-166.
    22. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    23. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    24. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    25. Jaehyun Lee & Eunjung Lee & Jinho Kim, 2020. "Electric Vehicle Charging and Discharging Algorithm Based on Reinforcement Learning with Data-Driven Approach in Dynamic Pricing Scheme," Energies, MDPI, vol. 13(8), pages 1-18, April.
    26. Bhatti, Ghanishtha & Mohan, Harshit & Raja Singh, R., 2021. "Towards the future of smart electric vehicles: Digital twin technology," Renewable and Sustainable Energy Reviews, Elsevier, vol. 141(C).
    27. Cheng Wang & Zhou Gao & Peng Yang & Zhenpo Wang & Zhiheng Li, 2021. "Electric Vehicle Charging Facility Planning Based on Flow Demand—A Case Study," Sustainability, MDPI, vol. 13(9), pages 1-23, April.
    28. Tuchnitz, Felix & Ebell, Niklas & Schlund, Jonas & Pruckner, Marco, 2021. "Development and Evaluation of a Smart Charging Strategy for an Electric Vehicle Fleet Based on Reinforcement Learning," Applied Energy, Elsevier, vol. 285(C).
    29. Alqahtani, Mohammed & Hu, Mengqi, 2022. "Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning," Energy, Elsevier, vol. 244(PA).
    30. Lee, Sangyoon & Choi, Dae-Hyun, 2021. "Dynamic pricing and energy management for profit maximization in multiple smart electric vehicle charging stations: A privacy-preserving deep reinforcement learning approach," Applied Energy, Elsevier, vol. 304(C).
    31. Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    32. Balali, Yasaman & Stegen, Sascha, 2021. "Review of energy storage systems for vehicles based on technology, environmental impacts, and costs," Renewable and Sustainable Energy Reviews, Elsevier, vol. 135(C).
    33. Yang, Zhile & Li, Kang & Foley, Aoife, 2015. "Computational scheduling methods for integrating plug-in electric vehicles with power systems: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 51(C), pages 396-416.
    34. Wang, Yi & Rousis, Anastasios Oulis & Strbac, Goran, 2020. "On microgrids and resilience: A comprehensive review on modeling and operational strategies," Renewable and Sustainable Energy Reviews, Elsevier, vol. 134(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Wang, Yi & Qiu, Dawei & He, Yinglong & Zhou, Quan & Strbac, Goran, 2023. "Multi-agent reinforcement learning for electric vehicle decarbonized routing and scheduling," Energy, Elsevier, vol. 284(C).
    2. Zhen Huang & Xuechun Xiao & Yuan Gao & Yonghong Xia & Tomislav Dragičević & Pat Wheeler, 2023. "Emerging Information Technologies for the Energy Management of Onboard Microgrids in Transportation Applications," Energies, MDPI, vol. 16(17), pages 1-26, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Homod, Raad Z. & Togun, Hussein & Kadhim Hussein, Ahmed & Noraldeen Al-Mousawi, Fadhel & Yaseen, Zaher Mundher & Al-Kouz, Wael & Abd, Haider J. & Alawi, Omer A. & Goodarzi, Marjan & Hussein, Omar A., 2022. "Dynamics analysis of a novel hybrid deep clustering for unsupervised learning by reinforcement of multi-agent to energy saving in intelligent buildings," Applied Energy, Elsevier, vol. 313(C).
    3. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    4. Qiu, Dawei & Wang, Yi & Sun, Mingyang & Strbac, Goran, 2022. "Multi-service provision for electric vehicles in power-transportation networks towards a low-carbon transition: A hierarchical and hybrid multi-agent reinforcement learning approach," Applied Energy, Elsevier, vol. 313(C).
    5. Qiu, Dawei & Dong, Zihang & Zhang, Xi & Wang, Yi & Strbac, Goran, 2022. "Safe reinforcement learning for real-time automatic control in a smart energy-hub," Applied Energy, Elsevier, vol. 309(C).
    6. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 318(C).
    7. Fescioglu-Unver, Nilgun & Yıldız Aktaş, Melike, 2023. "Electric vehicle charging service operations: A review of machine learning applications for infrastructure planning, control, pricing and routing," Renewable and Sustainable Energy Reviews, Elsevier, vol. 188(C).
    8. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).
    9. Qiu, Dawei & Wang, Yi & Zhang, Tingqi & Sun, Mingyang & Strbac, Goran, 2023. "Hierarchical multi-agent reinforcement learning for repair crews dispatch control towards multi-energy microgrid resilience," Applied Energy, Elsevier, vol. 336(C).
    10. Caputo, Cesare & Cardin, Michel-Alexandre & Ge, Pudong & Teng, Fei & Korre, Anna & Antonio del Rio Chanona, Ehecatl, 2023. "Design and planning of flexible mobile Micro-Grids using Deep Reinforcement Learning," Applied Energy, Elsevier, vol. 335(C).
    11. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    12. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    13. Li, Yanxue & Wang, Zixuan & Xu, Wenya & Gao, Weijun & Xu, Yang & Xiao, Fu, 2023. "Modeling and energy dynamic control for a ZEH via hybrid model-based deep reinforcement learning," Energy, Elsevier, vol. 277(C).
    14. Harrold, Daniel J.B. & Cao, Jun & Fan, Zhong, 2022. "Data-driven battery operation for energy arbitrage using rainbow deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
    15. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    16. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    17. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    18. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    19. Ahmad Almaghrebi & Fares Aljuheshi & Mostafa Rafaie & Kevin James & Mahmoud Alahmad, 2020. "Data-Driven Charging Demand Prediction at Public Charging Stations Using Supervised Machine Learning Regression Methods," Energies, MDPI, vol. 13(16), pages 1-21, August.
    20. Li, Sichen & Hu, Weihao & Cao, Di & Chen, Zhe & Huang, Qi & Blaabjerg, Frede & Liao, Kaiji, 2023. "Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:rensus:v:173:y:2023:i:c:s1364032122009339. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600126/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.