
Hierarchical deep reinforcement learning based multi-agent game control for energy consumption and traffic efficiency improving of autonomous vehicles

Author

Listed:
  • Chen, Xiang
  • Wang, Xu
  • Zhao, Wanzhong
  • Wang, Chunyan
  • Cheng, Shuo
  • Luan, Zhongkai

Abstract

To achieve highly autonomous driving while ensuring eco-driving, this paper proposes a hierarchical multi-agent deep reinforcement learning framework that optimizes energy consumption and traffic efficiency for autonomous vehicles. Within this framework, driving, braking, traffic efficiency, and energy management are modeled as independent agents in a game-theoretic setting, with distinct reward functions designed to establish cooperative and competitive relationships among the agents according to the training objectives. Path planning and obstacle detection are first implemented in the CARLA simulation environment, where deep learning algorithms enhance trajectory tracking and real-time decision-making. Incorporating complex urban factors such as traffic signals and vehicle interactions, a multi-objective hierarchical optimization strategy is proposed to balance energy consumption, traffic efficiency, and driving safety. For energy management, an expert-knowledge-guided multi-agent learning mechanism is introduced to reduce the search space and accelerate convergence, improving energy efficiency and decision-making stability. Simulation results demonstrate that, compared with the conventional Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (MATD3) method, the proposed Expert-MATD3 method reduces energy consumption by 10%, shortens travel time by approximately 20.37%, and exhibits the slowest state of charge (SOC) decline, demonstrating superior energy management efficiency while maintaining a high level of driving safety. The method also exhibits strong generalization capability and real-time performance, offering a promising approach to sustainable and efficient autonomous driving.
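
The expert-guided exploration idea in the abstract can be illustrated with a short sketch. The Python fragment below is a minimal illustration, not the authors' implementation: a hypothetical rule-based expert proposes an energy-split action, and a MATD3-style actor is confined to a band around that proposal which widens as training progresses; this is one plausible way to realize the "reduced search space, faster convergence" mechanism the abstract describes. All names, the SOC heuristic, and the annealing schedule are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of expert-knowledge-guided action
# selection for a MATD3-style energy-management agent. The SOC heuristic and
# the linear annealing schedule are assumptions made for illustration.
import numpy as np

def expert_action(soc, power_demand):
    """Hypothetical rule-based expert: lean on the engine when SOC is low."""
    engine_share = np.clip(0.8 - soc, 0.0, 1.0)
    return np.array([engine_share * power_demand])

def guided_action(actor, state, soc, power_demand, step, total_steps):
    """Blend the learned action with the expert's proposal and confine
    exploration to a band around the expert that widens during training."""
    a_rl = actor(state)                            # raw MATD3 actor output
    a_exp = expert_action(soc, power_demand)
    trust = min(1.0, step / (0.5 * total_steps))   # anneal expert -> policy
    a = trust * a_rl + (1.0 - trust) * a_exp
    band = 0.1 + 0.9 * trust                       # allowed deviation from expert
    return np.clip(a, a_exp - band, a_exp + band)  # small search space early on
```

Clipping actions to an expert-centred band early in training is only one way to obtain the faster convergence the abstract reports; alternatives such as seeding the replay buffer with expert demonstrations or shaping rewards toward expert behaviour would serve the same purpose.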

Suggested Citation

  • Chen, Xiang & Wang, Xu & Zhao, Wanzhong & Wang, Chunyan & Cheng, Shuo & Luan, Zhongkai, 2025. "Hierarchical deep reinforcement learning based multi-agent game control for energy consumption and traffic efficiency improving of autonomous vehicles," Energy, Elsevier, vol. 323(C).
  • Handle: RePEc:eee:energy:v:323:y:2025:i:c:s0360544225013118
    DOI: 10.1016/j.energy.2025.135669

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544225013118
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2025.135669?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Sun, Wenjing & Zou, Yuan & Zhang, Xudong & Guo, Ningyuan & Zhang, Bin & Du, Guodong, 2022. "High robustness energy management strategy of hybrid electric vehicle based on improved soft actor-critic deep reinforcement learning," Energy, Elsevier, vol. 258(C).
    2. Yao, Zhihong & Wang, Yi & Liu, Bo & Zhao, Bin & Jiang, Yangsheng, 2021. "Fuel consumption and transportation emissions evaluation of mixed traffic flow with connected automated vehicles and human-driven vehicles on expressway," Energy, Elsevier, vol. 230(C).
    3. Dong, Haoxuan & Zhuang, Weichao & Chen, Boli & Wang, Yan & Lu, Yanbo & Liu, Ying & Xu, Liwei & Yin, Guodong, 2022. "A comparative study of energy-efficient driving strategy for connected internal combustion engine and electric vehicles at signalized intersections," Applied Energy, Elsevier, vol. 310(C).
    4. Zhang, Hailong & Peng, Jiankun & Dong, Hanxuan & Tan, Huachun & Ding, Fan, 2023. "Hierarchical reinforcement learning based energy management strategy of plug-in hybrid electric vehicle for ecological car-following process," Applied Energy, Elsevier, vol. 333(C).
    5. Zhou, Jianhao & Xue, Siwu & Xue, Yuan & Liao, Yuhui & Liu, Jun & Zhao, Wanzhong, 2021. "A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning," Energy, Elsevier, vol. 224(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhang, Dongfang & Sun, Wei & Zou, Yuan & Zhang, Xudong, 2025. "Energy management in HDHEV with dual APUs: Enhancing soft actor-critic using clustered experience replay and multi-dimensional priority sampling," Energy, Elsevier, vol. 319(C).
    2. Dong, Haoxuan & Shi, Junzhe & Zhuang, Weichao & Li, Zhaojian & Song, Ziyou, 2025. "Analyzing the impact of mixed vehicle platoon formations on vehicle energy and traffic efficiencies," Applied Energy, Elsevier, vol. 377(PA).
    3. Fan Wang & Yina Hong & Xiaohuan Zhao, 2025. "Research and Comparative Analysis of Energy Management Strategies for Hybrid Electric Vehicles: A Review," Energies, MDPI, vol. 18(11), pages 1-28, May.
    4. Wang, Hanchen & Arjmandzadeh, Ziba & Ye, Yiming & Zhang, Jiangfeng & Xu, Bin, 2024. "FlexNet: A warm start method for deep reinforcement learning in hybrid electric vehicle energy management applications," Energy, Elsevier, vol. 288(C).
    5. Chen, Fujun & Wang, Bowen & Ni, Meng & Gong, Zhichao & Jiao, Kui, 2024. "Online energy management strategy for ammonia-hydrogen hybrid electric vehicles harnessing deep reinforcement learning," Energy, Elsevier, vol. 301(C).
    6. Liu, Zemin Eitan & Li, Yong & Zhou, Quan & Shuai, Bin & Hua, Min & Xu, Hongming & Xu, Lubing & Tan, Guikun & Li, Yanfei, 2025. "Real-time energy management for HEV combining naturalistic driving data and deep reinforcement learning with high generalization," Applied Energy, Elsevier, vol. 377(PA).
    7. Li, Jie & Wu, Xiaodong & Fan, Jiawei & Liu, Yonggang & Xu, Min, 2023. "Overcoming driving challenges in complex urban traffic: A multi-objective eco-driving strategy via safety model based reinforcement learning," Energy, Elsevier, vol. 284(C).
    8. Wang, Jinhai & Du, Changqing & Yan, Fuwu & Hua, Min & Gongye, Xiangyu & Yuan, Quan & Xu, Hongming & Zhou, Quan, 2025. "Bayesian optimization for hyper-parameter tuning of an improved twin delayed deep deterministic policy gradients based energy management strategy for plug-in hybrid electric vehicles," Applied Energy, Elsevier, vol. 381(C).
    9. Liu, Yonggang & Wu, Yitao & Wang, Xiangyu & Li, Liang & Zhang, Yuanjian & Chen, Zheng, 2023. "Energy management for hybrid electric vehicles based on imitation reinforcement learning," Energy, Elsevier, vol. 263(PC).
    10. Lifeng Wang & Hu Liang & Yuxin Jian & Qiang Luo & Xiaoxiang Gong & Yiwei Zhang, 2024. "Optimized path planning and scheduling strategies for connected and automated vehicles at single-lane roundabouts," PLOS ONE, Public Library of Science, vol. 19(8), pages 1-20, August.
    11. Umme Mumtahina & Sanath Alahakoon & Peter Wolfs, 2025. "A Day-Ahead Optimal Battery Scheduling Considering the Grid Stability of Distribution Feeders," Energies, MDPI, vol. 18(5), pages 1-20, February.
    12. Salvini, Pericle & Kunze, Lars & Jirotka, Marina, 2024. "On self-driving cars and its (broken?) promises. A case study analysis of the German Act on Autonomous Driving," Technology in Society, Elsevier, vol. 78(C).
    13. Tang, Tianfeng & Peng, Qianlong & Shi, Qing & Peng, Qingguo & Zhao, Jin & Chen, Chaoyi & Wang, Guangwei, 2024. "Energy management of fuel cell hybrid electric bus in mountainous regions: A deep reinforcement learning approach considering terrain characteristics," Energy, Elsevier, vol. 311(C).
    14. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    15. Penghui Qiang & Peng Wu & Tao Pan & Huaiquan Zang, 2021. "Real-Time Approximate Equivalent Consumption Minimization Strategy Based on the Single-Shaft Parallel Hybrid Powertrain," Energies, MDPI, vol. 14(23), pages 1-22, November.
    16. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    17. Xi, Lei & Shi, Yu & Quan, Yue & Liu, Zhihong, 2024. "Research on the multi-area cooperative control method for novel power systems," Energy, Elsevier, vol. 313(C).
    18. Li, Jie & Fotouhi, Abbas & Pan, Wenjun & Liu, Yonggang & Zhang, Yuanjian & Chen, Zheng, 2023. "Deep reinforcement learning-based eco-driving control for connected electric vehicles at signalized intersections considering traffic uncertainties," Energy, Elsevier, vol. 279(C).
    19. Kang, Hyuna & Jung, Seunghoon & Kim, Hakpyeong & Jeoung, Jaewon & Hong, Taehoon, 2024. "Reinforcement learning-based optimal scheduling model of battery energy storage system at the building level," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).
    20. Zhang, Hao & Lei, Nuo & Chen, Boli & Li, Bingbing & Li, Rulong & Wang, Zhi, 2024. "Modeling and control system optimization for electrified vehicles: A data-driven approach," Energy, Elsevier, vol. 310(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:323:y:2025:i:c:s0360544225013118. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.