
Optimal Management for EV Charging Stations: A Win–Win Strategy for Different Stakeholders Using Constrained Deep Q-Learning

Author

Listed:
  • Athanasios Paraskevas

    (NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece)

  • Dimitrios Aletras

    (NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece)

  • Antonios Chrysopoulos

    (NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece
    School of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece)

  • Antonios Marinopoulos

    (European Climate, Infrastructure and Environment Executive Agency (CINEA), European Commission, B-1049 Brussels, Belgium)

  • Dimitrios I. Doukas

    (NET2GRID BV, Krystalli 4, 54630 Thessaloniki, Greece)

Abstract

Given the growing awareness of rising energy demand and the effects of greenhouse gas emissions, the decarbonization of the transportation sector is of great significance. In particular, the adoption of electric vehicles (EVs) seems a promising option, provided that public charging infrastructure is available. However, devising a pricing and scheduling strategy for public EV charging stations is a non-trivial yet important task, since a sub-optimal decision could lead to long waiting times or extreme changes to the power load profile. In addition, optimal pricing and scheduling for EV charging stations ought to take into account the interests of different stakeholders, such as the station owner and the EV owners. This work proposes a deep reinforcement learning (DRL) agent that optimizes pricing and charging control in a public EV charging station under a real-time varying electricity price. The primary goal is to maximize the station’s profits while ensuring that the customers’ charging demands are also satisfied. Moreover, the DRL approach is data-driven: it can operate under uncertainty without requiring explicit models of the environment. Variants of scheduling and DRL training algorithms from the literature are also proposed to ensure that both conflicting objectives are achieved. Experimental results validate the effectiveness of the proposed approach.
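To make the core idea concrete, the sketch below is a minimal, hypothetical illustration of constrained Q-learning for a single charger: the agent chooses a charging power in each time slot under an assumed real-time price curve, and the customer-demand constraint is enforced by masking actions that would make it impossible to finish charging before departure. It uses a tabular Q-function in place of the paper's deep network, and the tariff, horizon, power levels, and price signal are all invented for illustration; it is not the authors' implementation.

```python
# Minimal sketch of constrained Q-learning for one EV charging session.
# NOT the paper's method: a tabular Q-function stands in for the deep network,
# and the demand constraint is handled by masking infeasible actions.
# All numbers (tariff, horizon, price curve, power levels) are assumptions.
import random
from collections import defaultdict

MAX_POWER = 3          # discrete charging levels: 0..3 energy units per slot
HORIZON = 12           # time slots until the EV departs
TARIFF = 0.40          # assumed flat price billed to the customer per unit

def wholesale_price(t):
    """Toy real-time electricity price: cheapest mid-horizon (assumed)."""
    return 0.15 + 0.10 * abs(t - HORIZON // 2) / (HORIZON // 2)

def feasible_actions(remaining, slots_left):
    """Constraint as an action mask: only powers that still allow the
    remaining demand to be delivered in the slots that are left."""
    return [a for a in range(min(remaining, MAX_POWER) + 1)
            if remaining - a <= MAX_POWER * (slots_left - 1)]

Q = defaultdict(float)                 # Q[(slot, remaining_demand, action)]
alpha, gamma, eps = 0.1, 0.99, 0.1
random.seed(0)

for episode in range(20_000):
    remaining = random.randint(4, MAX_POWER * HORIZON)   # requested energy
    for t in range(HORIZON):
        acts = feasible_actions(remaining, HORIZON - t)
        if random.random() < eps:                        # epsilon-greedy
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda x: Q[(t, remaining, x)])
        reward = a * (TARIFF - wholesale_price(t))       # station profit
        nxt = remaining - a
        if t + 1 < HORIZON:
            nxt_acts = feasible_actions(nxt, HORIZON - t - 1)
            target = reward + gamma * max(Q[(t + 1, nxt, x)] for x in nxt_acts)
        else:
            target = reward                              # session ends here
        Q[(t, remaining, a)] += alpha * (target - Q[(t, remaining, a)])
        remaining = nxt
```

In the paper, the Q-function is a neural network and pricing is part of the control problem as well; the action mask above is simply one way to fold a charging-demand constraint into action selection.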

Suggested Citation

  • Athanasios Paraskevas & Dimitrios Aletras & Antonios Chrysopoulos & Antonios Marinopoulos & Dimitrios I. Doukas, 2022. "Optimal Management for EV Charging Stations: A Win–Win Strategy for Different Stakeholders Using Constrained Deep Q-Learning," Energies, MDPI, vol. 15(7), pages 1-24, March.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:7:p:2323-:d:777424

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/7/2323/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/7/2323/
    Download Restriction: no

    References listed on IDEAS

    1. Shafique, Muhammad & Azam, Anam & Rafiq, Muhammad & Luo, Xiaowei, 2021. "Investigating the nexus among transport, economic growth and environmental degradation: Evidence from panel ARDL approach," Transport Policy, Elsevier, vol. 109(C), pages 61-71.
2. Muhammad Shafique & Anam Azam & Muhammad Rafiq & Xiaowei Luo, 2020. "Evaluating the Relationship between Freight Transport, Economic Prosperity, Urbanization, and CO2 Emissions: Evidence from Hong Kong, Singapore, and South Korea," Sustainability, MDPI, vol. 12(24), pages 1-14, December.
    3. Mohammed Al-Saadi & Josu Olmos & Andoni Saez-de-Ibarra & Joeri Van Mierlo & Maitane Berecibar, 2022. "Fast Charging Impact on the Lithium-Ion Batteries’ Lifetime and Cost-Effective Battery Sizing in Heavy-Duty Electric Vehicles Applications," Energies, MDPI, vol. 15(4), pages 1-23, February.
    4. Stergios Statharas & Yannis Moysoglou & Pelopidas Siskos & Pantelis Capros, 2021. "Simulating the Evolution of Business Models for Electricity Recharging Infrastructure Development by 2030: A Case Study for Greece," Energies, MDPI, vol. 14(9), pages 1-24, April.
    5. Rishabh Ghotge & Yitzhak Snow & Samira Farahani & Zofia Lukszo & Ad van Wijk, 2020. "Optimized Scheduling of EV Charging in Solar Parking Lots for Local Peak Reduction under EV Demand Uncertainty," Energies, MDPI, vol. 13(5), pages 1-18, March.
6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    7. Alexandre Lucas & Ricardo Barranco & Nazir Refa, 2019. "EV Idle Time Estimation on Charging Infrastructure, Comparing Supervised Machine Learning Regressions," Energies, MDPI, vol. 12(2), pages 1-17, January.
    8. Ahmad Almaghrebi & Fares Aljuheshi & Mostafa Rafaie & Kevin James & Mahmoud Alahmad, 2020. "Data-Driven Charging Demand Prediction at Public Charging Stations Using Supervised Machine Learning Regression Methods," Energies, MDPI, vol. 13(16), pages 1-21, August.
    9. Jaehyun Lee & Eunjung Lee & Jinho Kim, 2020. "Electric Vehicle Charging and Discharging Algorithm Based on Reinforcement Learning with Data-Driven Approach in Dynamic Pricing Scheme," Energies, MDPI, vol. 13(8), pages 1-18, April.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Zhang, Shulei & Jia, Runda & Pan, Hengxin & Cao, Yankai, 2023. "A safe reinforcement learning-based charging strategy for electric vehicles in residential microgrid," Applied Energy, Elsevier, vol. 348(C).
    2. Aya Amer & Khaled Shaban & Ahmed Massoud, 2022. "Demand Response in HEMSs Using DRL and the Impact of Its Various Configurations and Environmental Changes," Energies, MDPI, vol. 15(21), pages 1-20, November.
    3. Cui, Li & Wang, Qingyuan & Qu, Hongquan & Wang, Mingshen & Wu, Yile & Ge, Le, 2023. "Dynamic pricing for fast charging stations with deep reinforcement learning," Applied Energy, Elsevier, vol. 346(C).
    4. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ahmad Almaghrebi & Fares Aljuheshi & Mostafa Rafaie & Kevin James & Mahmoud Alahmad, 2020. "Data-Driven Charging Demand Prediction at Public Charging Stations Using Supervised Machine Learning Regression Methods," Energies, MDPI, vol. 13(16), pages 1-21, August.
    2. Buzna, Luboš & De Falco, Pasquale & Ferruzzi, Gabriella & Khormali, Shahab & Proto, Daniela & Refa, Nazir & Straka, Milan & van der Poel, Gijs, 2021. "An ensemble methodology for hierarchical probabilistic electric vehicle load forecasting at regular charging stations," Applied Energy, Elsevier, vol. 283(C).
    3. Adrian Ostermann & Yann Fabel & Kim Ouan & Hyein Koo, 2022. "Forecasting Charging Point Occupancy Using Supervised Learning Algorithms," Energies, MDPI, vol. 15(9), pages 1-23, May.
    4. Sanchari Deb, 2021. "Machine Learning for Solving Charging Infrastructure Planning Problems: A Comprehensive Review," Energies, MDPI, vol. 14(23), pages 1-19, November.
    5. Azam, Anam & Ateeq, Muhammad & Shafique, Muhammad & Rafiq, Muhammad & Yuan, Jiahai, 2023. "Primary energy consumption-growth nexus: The role of natural resources, quality of government, and fixed capital formation," Energy, Elsevier, vol. 263(PA).
    6. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    7. Jaehyun Lee & Eunjung Lee & Jinho Kim, 2020. "Electric Vehicle Charging and Discharging Algorithm Based on Reinforcement Learning with Data-Driven Approach in Dynamic Pricing Scheme," Energies, MDPI, vol. 13(8), pages 1-18, April.
    8. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
9. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    10. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    11. Byungsung Lee & Haesung Lee & Hyun Ahn, 2020. "Improving Load Forecasting of Electric Vehicle Charging Stations Through Missing Data Imputation," Energies, MDPI, vol. 13(18), pages 1-15, September.
    12. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    13. Filip Škultéty & Dominika Beňová & Jozef Gnap, 2021. "City Logistics as an Imperative Smart City Mechanism: Scrutiny of Clustered EU27 Capitals," Sustainability, MDPI, vol. 13(7), pages 1-16, March.
    14. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    15. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    16. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    17. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    18. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    19. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    20. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:7:p:2323-:d:777424. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.