Printed from https://ideas.repec.org/a/gam/jeners/v15y2022i13p4582-d845742.html

Online Charging Strategy for Electric Vehicle Clusters Based on Multi-Agent Reinforcement Learning and Long–Short Memory Networks

Authors
  • Xianhao Shen

    (College of Information Science and Engineering, Guilin University of Technology, Guilin 541006, China)

  • Yexin Zhang

    (College of Information Science and Engineering, Guilin University of Technology, Guilin 541006, China)

  • Decheng Wang

    (College of Information Science and Engineering, Guilin University of Technology, Guilin 541006, China)

Abstract

The electric vehicle (EV) cluster charging strategy is a key factor in grid load shifting under the vehicle-to-grid (V2G) mode. The conflict between time-varying tariffs and electricity demand at different times of day directly affects the charging cost and, in the worst case, can even lead to the collapse of the whole grid. In this paper, we propose an online charging strategy for community home EV clusters based on multi-agent reinforcement learning and a long short-term memory (LSTM) network, which addresses the grid load problem and minimizes the charging cost while keeping the EV cluster charging load benign. Grid electricity prices are accurately predicted with an LSTM network, and the optimal charging strategy is derived with the MADDPG multi-agent reinforcement learning algorithm. Simulation results show that, compared with the DQN algorithm, the proposed online charging strategy reduces the overall charging cost by about 5.8% by dynamically adjusting the charging power in each time period while maintaining grid load balance.
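The price-responsive scheduling idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's MADDPG implementation: it is a greedy stand-in in which each EV shifts its energy demand toward the cheapest forecast periods, subject to a per-period aggregate cap that stands in for the grid load-balance constraint. All function names and parameter values below are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative sketch of tariff-aware EV cluster charging
# (a greedy stand-in for the paper's MADDPG strategy; every name and
# number here is an illustrative assumption, not from the paper).

def schedule_charging(tariff, demand_kwh, max_power_kw, cap_kw):
    """Allocate each EV's energy demand to the cheapest periods.

    tariff       -- forecast price per kWh for each 1 h time period
                    (in the paper, such a forecast comes from an LSTM)
    demand_kwh   -- energy each EV must receive before departure
    max_power_kw -- per-EV charging power limit
    cap_kw       -- aggregate per-period cap standing in for the
                    grid load-balance constraint
    Returns schedule[ev][t], the charging power of EV `ev` at period `t`.
    """
    T = len(tariff)
    order = sorted(range(T), key=lambda t: tariff[t])  # cheapest periods first
    load = [0.0] * T                                   # aggregate grid load
    schedule = [[0.0] * T for _ in demand_kwh]
    for ev, need in enumerate(demand_kwh):
        for t in order:
            if need <= 0:
                break
            # Charge as much as the EV limit and the grid cap allow.
            headroom = min(max_power_kw, cap_kw - load[t])
            p = min(need, headroom)
            if p > 0:
                schedule[ev][t] = p
                load[t] += p
                need -= p
    return schedule

def total_cost(schedule, tariff):
    """Total charging cost of the cluster under the given tariff."""
    return sum(p * tariff[t] for row in schedule
               for t, p in enumerate(row))
```

For example, with a two-EV cluster and a tariff that dips overnight, the resulting schedule concentrates charging in the cheap periods while the aggregate load never exceeds the cap; a learned MADDPG policy pursues the same trade-off, but online and under forecast uncertainty.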

Suggested Citation

  • Xianhao Shen & Yexin Zhang & Decheng Wang, 2022. "Online Charging Strategy for Electric Vehicle Clusters Based on Multi-Agent Reinforcement Learning and Long–Short Memory Networks," Energies, MDPI, vol. 15(13), pages 1-14, June.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:13:p:4582-:d:845742

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/13/4582/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/13/4582/
    Download Restriction: no

    References listed on IDEAS

    1. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    2. Tuchnitz, Felix & Ebell, Niklas & Schlund, Jonas & Pruckner, Marco, 2021. "Development and Evaluation of a Smart Charging Strategy for an Electric Vehicle Fleet Based on Reinforcement Learning," Applied Energy, Elsevier, vol. 285(C).
    3. Frank Schneider & Ulrich W. Thonemann & Diego Klabjan, 2018. "Optimization of Battery Charging and Purchasing at Electric Vehicle Battery Swap Stations," Transportation Science, INFORMS, vol. 52(5), pages 1211-1234, October.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Jin, Ruiyang & Zhou, Yuke & Lu, Chao & Song, Jie, 2022. "Deep reinforcement learning-based strategy for charging station participating in demand response," Applied Energy, Elsevier, vol. 328(C).
    3. Dominique Barth & Benjamin Cohen-Boulakia & Wilfried Ehounou, 2022. "Distributed Reinforcement Learning for the Management of a Smart Grid Interconnecting Independent Prosumers," Energies, MDPI, vol. 15(4), pages 1-19, February.
    4. Tsoumalis, Georgios I. & Bampos, Zafeirios N. & Biskas, Pandelis N. & Keranidis, Stratos D. & Symeonidis, Polychronis A. & Voulgarakis, Dimitrios K., 2022. "A novel system for providing explicit demand response from domestic natural gas boilers," Applied Energy, Elsevier, vol. 317(C).
    5. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    6. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    7. Samuel M. Muhindo & Roland P. Malhamé & Geza Joos, 2021. "A Novel Mean Field Game-Based Strategy for Charging Electric Vehicles in Solar Powered Parking Lots," Energies, MDPI, vol. 14(24), pages 1-21, December.
    8. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    9. Asadi, Amin & Nurre Pinkley, Sarah, 2021. "A stochastic scheduling, allocation, and inventory replenishment problem for battery swap stations," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 146(C).
    10. Pinto, Giuseppe & Piscitelli, Marco Savino & Vázquez-Canteli, José Ramón & Nagy, Zoltán & Capozzoli, Alfonso, 2021. "Coordinated energy management for a cluster of buildings through deep reinforcement learning," Energy, Elsevier, vol. 229(C).
    11. Ussama Assad & Muhammad Arshad Shehzad Hassan & Umar Farooq & Asif Kabir & Muhammad Zeeshan Khan & S. Sabahat H. Bukhari & Zain ul Abidin Jaffri & Judit Oláh & József Popp, 2022. "Smart Grid, Demand Response and Optimization: A Critical Review of Computational Methods," Energies, MDPI, vol. 15(6), pages 1-36, March.
    12. Davarzani, Sima & Pisica, Ioana & Taylor, Gareth A. & Munisami, Kevin J., 2021. "Residential Demand Response Strategies and Applications in Active Distribution Network Management," Renewable and Sustainable Energy Reviews, Elsevier, vol. 138(C).
    13. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    14. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    15. Pallonetto, Fabiano & De Rosa, Mattia & Milano, Federico & Finn, Donal P., 2019. "Demand response algorithms for smart-grid ready residential buildings using machine learning models," Applied Energy, Elsevier, vol. 239(C), pages 1265-1282.
    16. Vo-Van Thanh & Wencong Su & Bin Wang, 2022. "Optimal DC Microgrid Operation with Model Predictive Control-Based Voltage-Dependent Demand Response and Optimal Battery Dispatch," Energies, MDPI, vol. 15(6), pages 1-19, March.
    17. Feifeng Zheng & Zhaojie Wang & Ming Liu, 2022. "Overnight charging scheduling of battery electric buses with uncertain charging time," Operational Research, Springer, vol. 22(5), pages 4865-4903, November.
    18. Xu, Fangyuan & Zhu, Weidong & Wang, Yi Fei & Lai, Chun Sing & Yuan, Haoliang & Zhao, Yujia & Guo, Siming & Fu, Zhengxin, 2022. "A new deregulated demand response scheme for load over-shifting city in regulated power market," Applied Energy, Elsevier, vol. 311(C).
    19. Zhao, Zhonghao & Lee, Carman K.M. & Huo, Jiage, 2023. "EV charging station deployment on coupled transportation and power distribution networks via reinforcement learning," Energy, Elsevier, vol. 267(C).
    20. Kalim Ullah & Sajjad Ali & Taimoor Ahmad Khan & Imran Khan & Sadaqat Jan & Ibrar Ali Shah & Ghulam Hafeez, 2020. "An Optimal Energy Optimization Strategy for Smart Grid Integrated with Renewable Energy Sources and Demand Response Programs," Energies, MDPI, vol. 13(21), pages 1-17, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:13:p:4582-:d:845742. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.