Printed from https://ideas.repec.org/a/gam/jeners/v16y2023i16p6067-d1220460.html

Multi-Agent DDPG Based Electric Vehicles Charging Station Recommendation

Author

Listed:
  • Khalil Bachiri

    (ETIS Laboratory, CNRS, ENSEA, CY TECH, CY Cergy Paris University, 95011 Cergy, France
    LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco)

  • Ali Yahyaouy

    (LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco)

  • Hamid Gualous

    (LUSAC Laboratory, University of Caen Normandie, 14032 Caen, France)

  • Maria Malek

    (ETIS Laboratory, CNRS, ENSEA, CY TECH, CY Cergy Paris University, 95011 Cergy, France)

  • Younes Bennani

    (LIPN Laboratory—CNRS UMR 7030, La Maison des Sciences Numériques, University of Sorbonne Paris Nord, 93000 Paris, France)

  • Philippe Makany

    (LUSAC Laboratory, University of Caen Normandie, 14032 Caen, France)

  • Nicoleta Rogovschi

    (LIPADE Laboratory, University of Paris Descartes, 75006 Paris, France)

Abstract

Electric vehicles (EVs) are a sustainable transportation solution with environmental benefits and energy efficiency. However, their popularity has raised challenges in locating appropriate charging stations, especially in cities with limited infrastructure and dynamic charging demands. To address this, we propose a multi-agent deep deterministic policy gradient (MADDPG) method for optimal EV charging station recommendations, considering real-time traffic conditions. Our approach aims to minimize total travel time in a stochastic environment for efficient smart transportation management. We adopt a centralized learning and decentralized execution strategy, treating each region of charging stations as an individual agent. Agents cooperate to recommend optimal charging stations based on various incentive functions and competitive contexts. The problem is modeled as a Markov game, suitable for analyzing multi-agent decisions in stochastic environments. Intelligent transportation systems provide us with traffic information, and each charging station feeds relevant data to the agents. We evaluate our MADDPG method under a substantial number of EV requests, demonstrating efficient handling of dynamic charging demands. Simulation experiments compare our method with DDPG and deterministic approaches, considering different distributions and EV numbers. The results highlight MADDPG’s superiority, emphasizing its value for sustainable urban mobility and efficient EV charging station scheduling.
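The decentralized-execution idea in the abstract can be illustrated with a minimal toy model: each regional agent observes only its own region's stations and recommends the one minimizing estimated travel-plus-waiting time. This is a sketch only; a greedy heuristic stands in for the learned MADDPG policies, and all class names, station numbers, and times below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Station:
    region: int         # which regional agent manages this station
    travel_min: float   # driving time to the station, in minutes
    queue: int          # EVs currently waiting
    service_min: float  # average charging time per EV, in minutes

def expected_wait(s: Station) -> float:
    # Travel time plus queue-induced waiting: the toy stand-in for
    # the travel-time objective each regional agent minimizes.
    return s.travel_min + s.queue * s.service_min

def recommend(stations: list[Station], region: int) -> Station:
    # Decentralized execution: the agent for `region` only sees
    # its own region's stations when acting.
    local = [s for s in stations if s.region == region]
    return min(local, key=expected_wait)

stations = [
    Station(region=0, travel_min=5.0, queue=4, service_min=30.0),
    Station(region=0, travel_min=12.0, queue=1, service_min=30.0),
    Station(region=1, travel_min=8.0, queue=0, service_min=30.0),
]

best = recommend(stations, region=0)
best.queue += 1  # the recommendation updates shared state other agents observe
print(best.travel_min)  # → 12.0: the farther station wins, 12+30 < 5+120
```

In the full method, the greedy rule above would be replaced by each agent's deterministic actor network, trained with a centralized critic that sees all agents' observations and actions.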

Suggested Citation

  • Khalil Bachiri & Ali Yahyaouy & Hamid Gualous & Maria Malek & Younes Bennani & Philippe Makany & Nicoleta Rogovschi, 2023. "Multi-Agent DDPG Based Electric Vehicles Charging Station Recommendation," Energies, MDPI, vol. 16(16), pages 1-17, August.
  • Handle: RePEc:gam:jeners:v:16:y:2023:i:16:p:6067-:d:1220460

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/16/16/6067/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/16/16/6067/
    Download Restriction: no

    References listed on IDEAS

    1. Chen, Zheng & Hu, Hengjie & Wu, Yitao & Zhang, Yuanjian & Li, Guang & Liu, Yonggang, 2020. "Stochastic model predictive control for energy management of power-split plug-in hybrid electric vehicles based on reinforcement learning," Energy, Elsevier, vol. 211(C).
    2. Sunyong Kim & Hyuk Lim, 2018. "Reinforcement Learning Based Energy Management Algorithm for Smart Energy Buildings," Energies, MDPI, vol. 11(8), pages 1-19, August.
    3. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    4. Li Zhang & Ke Gong & Maozeng Xu, 2019. "Congestion Control in Charging Stations Allocation with Q-Learning," Sustainability, MDPI, vol. 11(14), pages 1-11, July.
    5. Peng Han & Jinkuan Wang & Yinghua Han & Yan Li, 2014. "Resident Plug-In Electric Vehicle Charging Modeling and Scheduling Mechanism in the Smart Grid," Mathematical Problems in Engineering, Hindawi, vol. 2014, pages 1-8, January.
    6. Amad Ali & Rabia Shakoor & Abdur Raheem & Hafiz Abd ul Muqeet & Qasim Awais & Ashraf Ali Khan & Mohsin Jamil, 2022. "Latest Energy Storage Trends in Multi-Energy Standalone Electric Vehicle Charging Stations: A Comprehensive Study," Energies, MDPI, vol. 15(13), pages 1-19, June.
    7. Maksymilian Mądziel & Tiziana Campisi, 2023. "Energy Consumption of Electric Vehicles: Analysis of Selected Parameters Based on Created Database," Energies, MDPI, vol. 16(3), pages 1-18, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Jiachen Li & Xingfeng Duan & Zhennan Xiong & Peng Yao, 2024. "Tugboat Scheduling Method Based on the NRPER-DDPG Algorithm: An Integrated DDPG Algorithm with Prioritized Experience Replay and Noise Reduction," Sustainability, MDPI, vol. 16(8), pages 1-27, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Junchi Ma & Yuan Zhang & Zongtao Duan & Lei Tang, 2023. "PROLIFIC: Deep Reinforcement Learning for Efficient EV Fleet Scheduling and Charging," Sustainability, MDPI, vol. 15(18), pages 1-22, September.
    3. Ki-Beom Lee & Mohamed A. Ahmed & Dong-Ki Kang & Young-Chon Kim, 2020. "Deep Reinforcement Learning Based Optimal Route and Charging Station Selection," Energies, MDPI, vol. 13(23), pages 1-22, November.
    4. Svetozarevic, B. & Baumann, C. & Muntwiler, S. & Di Natale, L. & Zeilinger, M.N. & Heer, P., 2022. "Data-driven control of room temperature and bidirectional EV charging using deep reinforcement learning: Simulations and experiments," Applied Energy, Elsevier, vol. 307(C).
    5. Ritu Kandari & Neeraj Neeraj & Alexander Micallef, 2022. "Review on Recent Strategies for Integrating Energy Storage Systems in Microgrids," Energies, MDPI, vol. 16(1), pages 1-24, December.
    6. Ahmed M. Abed & Ali AlArjani, 2022. "The Neural Network Classifier Works Efficiently on Searching in DQN Using the Autonomous Internet of Things Hybridized by the Metaheuristic Techniques to Reduce the EVs’ Service Scheduling Time," Energies, MDPI, vol. 15(19), pages 1-25, September.
    7. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    8. Yujian Ye & Dawei Qiu & Huiyu Wang & Yi Tang & Goran Strbac, 2021. "Real-Time Autonomous Residential Demand Response Management Based on Twin Delayed Deep Deterministic Policy Gradient Learning," Energies, MDPI, vol. 14(3), pages 1-22, January.
    9. Ying Ji & Jianhui Wang & Jiacan Xu & Xiaoke Fang & Huaguang Zhang, 2019. "Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning," Energies, MDPI, vol. 12(12), pages 1-21, June.
    10. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    11. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    12. Alqahtani, Mohammed & Hu, Mengqi, 2022. "Dynamic energy scheduling and routing of multiple electric vehicles using deep reinforcement learning," Energy, Elsevier, vol. 244(PA).
    13. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    14. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    15. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    16. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    17. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    18. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    19. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    20. Ande Chang & Yuting Ji & Chunguang Wang & Yiming Bie, 2024. "CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method," Sustainability, MDPI, vol. 16(5), pages 1-17, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:16:y:2023:i:16:p:6067-:d:1220460. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.