
A bi-level solution strategy based on distributed proximal policy optimization for transmission and distribution network dispatch with EVs and variable energy

Authors

Listed:
  • Lu, Peng
  • Lan, Hanqing
  • Yuan, Qiwei
  • Jiang, Zhihao
  • Cao, Siqi
  • Ding, Jingyi
  • Wei, Qianrun
  • Fan, Junqiu
  • Cai, Quan
  • Zhang, Ning
  • Ye, Lin
  • Li, Kangping
  • Shahidehpour, Mohammad
  • Siano, Pierluigi

Abstract

Integrating large-scale wind power and extensive electric vehicle (EV) loads into the power grid affects the system's safety and economic operation, posing challenges such as frequent changes in grid dispatch instructions, unregulated EV charging and discharging behavior, and increased network losses. Therefore, a bi-level optimization model based on distributed proximal policy optimization (DPPO) is established for transmission and distribution network dispatch with large-scale EVs; it efficiently manages unit outputs and the system's charging and discharging capacity and allocates these capabilities to individual nodes in real time. The upper-level model minimizes the system's total operating cost by optimizing the operational status of thermal units and regulating the number of EVs charging and discharging in the transmission network. The lower-level model reduces the distribution network's total network-loss cost by optimizing EV charging and discharging power, active/reactive power in branch circuits, and voltage levels at node charging stations. Both the upper-level and lower-level models are solved with the DPPO method. The bi-level optimization model is tested on modified IEEE-24 and IEEE-33 systems and demonstrated through case studies.
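The abstract describes two building blocks: DPPO workers that optimize a policy with PPO's clipped surrogate objective, and a lower-level allocation of EV charging power across distribution nodes to reduce losses. The paper's actual formulation is not reproduced on this page, so the sketch below is only illustrative: `ppo_clip_objective` is the standard PPO clipped surrogate, and `lower_level_ev_split` is a hypothetical, closed-form toy stand-in for the lower-level problem (minimize a quadratic loss proxy subject to a fixed total charging budget), not the authors' model.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate, the per-worker objective in DPPO.

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each sample
    Returns the per-sample objective; the policy ascends its mean.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the elementwise minimum keeps the update pessimistic,
    # which is what bounds the policy step.
    return np.minimum(unclipped, clipped)

def lower_level_ev_split(total_ev_power, loss_coeff):
    """Toy lower-level problem (illustrative only): split a fixed EV
    charging budget across nodes to minimize sum_i c_i * p_i**2
    subject to sum_i p_i = total_ev_power.

    Setting the Lagrangian gradient to zero gives 2*c_i*p_i = lambda,
    so the optimal p_i is proportional to 1/c_i.
    """
    c = np.asarray(loss_coeff, dtype=float)
    w = 1.0 / c
    p = total_ev_power * w / w.sum()
    loss = float(np.sum(c * p**2))
    return p, loss
```

In this toy split, nodes with a higher loss coefficient receive less charging power, mirroring the abstract's idea of allocating charging capacity node by node; the real lower level additionally handles branch power flows and node voltages, which a quadratic proxy ignores.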

Suggested Citation

  • Lu, Peng & Lan, Hanqing & Yuan, Qiwei & Jiang, Zhihao & Cao, Siqi & Ding, Jingyi & Wei, Qianrun & Fan, Junqiu & Cai, Quan & Zhang, Ning & Ye, Lin & Li, Kangping & Shahidehpour, Mohammad & Siano, Pierluigi, 2025. "A bi-level solution strategy based on distributed proximal policy optimization for transmission and distribution network dispatch with EVs and variable energy," Applied Energy, Elsevier, vol. 384(C).
  • Handle: RePEc:eee:appene:v:384:y:2025:i:c:s0306261925001357
    DOI: 10.1016/j.apenergy.2025.125405

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925001357
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.125405?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    2. He, Lifu & Yang, Jun & Yan, Jun & Tang, Yufei & He, Haibo, 2016. "A bi-layer optimization based temporal and spatial scheduling for large-scale electric vehicles," Applied Energy, Elsevier, vol. 168(C), pages 179-192.
    3. Truong, Van Binh & Le, Long Bao, 2024. "Electric vehicle charging design: The factored action based reinforcement learning approach," Applied Energy, Elsevier, vol. 359(C).
    4. Xie, Shiwei & Hu, Zhijian & Wang, Jueying, 2020. "Two-stage robust optimization for expansion planning of active distribution systems coupled with urban transportation networks," Applied Energy, Elsevier, vol. 261(C).
    5. Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
    6. Xu, Bin & Rathod, Dhruvang & Zhang, Darui & Yebi, Adamu & Zhang, Xueyu & Li, Xiaoya & Filipi, Zoran, 2020. "Parametric study on reinforcement learning optimized energy management strategy for a hybrid electric vehicle," Applied Energy, Elsevier, vol. 259(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Xianglong Zhang & Ying Liu & Songlin Gu & Yuzhou Tian & Yifan Gao, 2025. "Event-Driven Edge Agent Framework for Distributed Control in Distribution Networks," Energies, MDPI, vol. 18(11), pages 1-23, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
    2. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    3. Han, Lijin & You, Congwen & Yang, Ningkang & Liu, Hui & Chen, Ke & Xiang, Changle, 2024. "Adaptive real-time energy management strategy using heuristic search for off-road hybrid electric vehicles," Energy, Elsevier, vol. 304(C).
    4. Penghui Qiang & Peng Wu & Tao Pan & Huaiquan Zang, 2021. "Real-Time Approximate Equivalent Consumption Minimization Strategy Based on the Single-Shaft Parallel Hybrid Powertrain," Energies, MDPI, vol. 14(23), pages 1-22, November.
    5. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    6. Shi, Dehua & Xu, Han & Wang, Shaohua & Hu, Jia & Chen, Long & Yin, Chunfang, 2024. "Deep reinforcement learning based adaptive energy management for plug-in hybrid electric vehicle with double deep Q-network," Energy, Elsevier, vol. 305(C).
    7. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    8. Marouane Adnane & Ahmed Khoumsi & João Pedro F. Trovão, 2023. "Efficient Management of Energy Consumption of Electric Vehicles Using Machine Learning—A Systematic and Comprehensive Survey," Energies, MDPI, vol. 16(13), pages 1-39, June.
    9. Connor Scott & Mominul Ahsan & Alhussein Albarbar, 2021. "Machine Learning Based Vehicle to Grid Strategy for Improving the Energy Performance of Public Buildings," Sustainability, MDPI, vol. 13(7), pages 1-22, April.
    10. Qi, Chunyang & Song, Chuanxue & Xiao, Feng & Song, Shixin, 2022. "Generalization ability of hybrid electric vehicle energy management strategy based on reinforcement learning method," Energy, Elsevier, vol. 250(C).
    11. Han, Lijin & Yang, Ke & Ma, Tian & Yang, Ningkang & Liu, Hui & Guo, Lingxiong, 2022. "Battery life constrained real-time energy management strategy for hybrid electric vehicles based on reinforcement learning," Energy, Elsevier, vol. 259(C).
    12. Zhang, Dehai & Li, Junhui & Guo, Ningyuan & Liu, Yonggang & Shen, Shiquan & Wei, Fuxing & Chen, Zheng & Zheng, Jia, 2024. "Adaptive deep reinforcement learning energy management for hybrid electric vehicles considering driving condition recognition," Energy, Elsevier, vol. 313(C).
    13. Wang, Yaxin & Lou, Diming & Xu, Ning & Fang, Liang & Tan, Piqiang, 2021. "Energy management and emission control for range extended electric vehicles," Energy, Elsevier, vol. 236(C).
    14. Kong, Yan & Xu, Nan & Liu, Qiao & Sui, Yan & Yue, Fenglai, 2023. "A data-driven energy management method for parallel PHEVs based on action dependent heuristic dynamic programming (ADHDP) model," Energy, Elsevier, vol. 265(C).
    15. Chen, Jiaxin & Tang, Xiaolin & Wang, Meng & Li, Cheng & Li, Zhangyong & Qin, Yechen, 2025. "Enhanced applicability of reinforcement learning-based energy management by pivotal state-based Markov trajectories," Energy, Elsevier, vol. 319(C).
    16. Li, Jie & Wu, Xiaodong & Xu, Min & Liu, Yonggang, 2022. "Deep reinforcement learning and reward shaping based eco-driving control for automated HEVs among signalized intersections," Energy, Elsevier, vol. 251(C).
    17. Tao, Fazhan & Fu, Zhigao & Gong, Huixian & Ji, Baofeng & Zhou, Yao, 2023. "Twin delayed deep deterministic policy gradient based energy management strategy for fuel cell/battery/ultracapacitor hybrid electric vehicles considering predicted terrain information," Energy, Elsevier, vol. 283(C).
    18. Liu, Teng & Tan, Wenhao & Tang, Xiaolin & Zhang, Jinwei & Xing, Yang & Cao, Dongpu, 2021. "Driving conditions-driven energy management strategies for hybrid electric vehicles: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 151(C).
    19. Niu, Junyan & Zhuang, Weichao & Ye, Jianwei & Song, Ziyou & Yin, Guodong & Zhang, Yuanjian, 2022. "Optimal sizing and learning-based energy management strategy of NCR/LTO hybrid battery system for electric taxis," Energy, Elsevier, vol. 257(C).
    20. Liu, Yonggang & Wu, Yitao & Wang, Xiangyu & Li, Liang & Zhang, Yuanjian & Chen, Zheng, 2023. "Energy management for hybrid electric vehicles based on imitation reinforcement learning," Energy, Elsevier, vol. 263(PC).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:384:y:2025:i:c:s0306261925001357. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.