Printed from https://ideas.repec.org/a/eee/appene/v399y2025ics0306261925012255.html

Dynamic ad hoc teaming and mutual distillation for cooperative learning of powertrain control policies for vehicle fleets

Author

Listed:
  • Kerbel, Lindsey
  • Ayalew, Beshah
  • Ivanco, Andrej

Abstract

Data-driven deep reinforcement learning (DRL)-based approaches have shown significant potential for improving the performance of vehicle control systems, in terms of energy consumption and other metrics, by allowing adaptation to the environments in which the vehicles are deployed. However, training DRL policies that work well in highly dynamic real-world environments is challenged by data efficiency and learning stability issues accompanied by high variances in performance. In this paper, we propose a novel cooperative learning approach to improve learning performance and reduce variances by continuously sharing experiences among powertrain control agents for a fleet of vehicles. The key contribution is the concept of a dynamic ad hoc teaming mechanism for decentralized and scalable mutual knowledge distillation between vehicles serving a distribution of routes. Our approach enables an asynchronous implementation that can operate whenever connectivity is available, thus removing a constraint for practical adoption. We compare two variants of the proposed framework with two other state-of-the-art alternatives in three scenarios that represent various deployments for a fleet. We find that the proposed framework significantly accelerates learning by reducing variances and improves long-term fleet mean total cycle rewards by up to 14 % compared to a baseline of individually learning agents. This improvement is on the same order as that achieved with centralized shared learning approaches, but without suffering their limitations of computational complexity and poor scalability. We also find that the proposed shared learning approach improves the adaptability of vehicle control agents to unfamiliar routes.
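The mutual-distillation idea described in the abstract can be sketched in a minimal form. This is not the authors' algorithm: the team-average target, the learning rate, and all function names below are assumptions for illustration only. The sketch shows the core mechanic of mutual distillation among peer agents: each agent in an ad hoc team nudges its discrete-action policy toward the team's average action distribution, so the agents' policies converge toward one another without any central learner.

```python
import numpy as np

def softmax(logits):
    """Convert logits to a probability distribution over actions."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def mutual_distillation_step(agent_logits, lr=0.5):
    """One round of mutual distillation (illustrative only).

    Each agent takes a gradient step on KL(team_average || own_policy)
    with respect to its own logits; the gradient of that KL term in
    logit space is (own_probs - team_average).
    """
    probs = [softmax(l) for l in agent_logits]
    target = np.mean(probs, axis=0)  # team-average "consensus" policy
    return [l - lr * (p - target) for l, p in zip(agent_logits, probs)]
```

Repeated application pulls the agents' policies together: the KL divergence between any two agents' action distributions shrinks with each round, which mirrors the variance-reduction role that experience sharing plays in the paper at a much coarser level.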

Suggested Citation

  • Kerbel, Lindsey & Ayalew, Beshah & Ivanco, Andrej, 2025. "Dynamic ad hoc teaming and mutual distillation for cooperative learning of powertrain control policies for vehicle fleets," Applied Energy, Elsevier, vol. 399(C).
  • Handle: RePEc:eee:appene:v:399:y:2025:i:c:s0306261925012255
    DOI: 10.1016/j.apenergy.2025.126495

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261925012255
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2025.126495?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Liu, Yonggang & Wu, Yitao & Wang, Xiangyu & Li, Liang & Zhang, Yuanjian & Chen, Zheng, 2023. "Energy management for hybrid electric vehicles based on imitation reinforcement learning," Energy, Elsevier, vol. 263(PC).
    2. Zhang, Hailong & Peng, Jiankun & Dong, Hanxuan & Tan, Huachun & Ding, Fan, 2023. "Hierarchical reinforcement learning based energy management strategy of plug-in hybrid electric vehicle for ecological car-following process," Applied Energy, Elsevier, vol. 333(C).
    3. Handong Li & Xuewu Dai & Stephen Goldrick & Richard Kotter & Nauman Aslam & Saleh Ali, 2024. "Reinforcement Learning for EV Fleet Smart Charging with On-Site Renewable Energy Sources," Energies, MDPI, vol. 17(21), pages 1-21, October.
    4. Wei, Hongqian & Zhang, Nan & Liang, Jun & Ai, Qiang & Zhao, Wenqiang & Huang, Tianyi & Zhang, Youtong, 2022. "Deep reinforcement learning based direct torque control strategy for distributed drive electric vehicles considering active safety and energy saving performance," Energy, Elsevier, vol. 238(PB).
    5. Kerbel, Lindsey & Ayalew, Beshah & Ivanco, Andrej, 2024. "Shared learning of powertrain control policies for vehicle fleets," Applied Energy, Elsevier, vol. 365(C).
    6. Zhengyu Yao & Hwan-Sik Yoon & Yang-Ki Hong, 2023. "Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning," Energies, MDPI, vol. 16(2), pages 1-18, January.
    7. Ruan, Jiageng & Wu, Changcheng & Liang, Zhaowen & Liu, Kai & Li, Bin & Li, Weihan & Li, Tongyang, 2023. "The application of machine learning-based energy management strategy in a multi-mode plug-in hybrid electric vehicle, part II: Deep deterministic policy gradient algorithm design for electric mode," Energy, Elsevier, vol. 269(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhang, Hao & Lei, Nuo & Chen, Boli & Li, Bingbing & Li, Rulong & Wang, Zhi, 2024. "Modeling and control system optimization for electrified vehicles: A data-driven approach," Energy, Elsevier, vol. 310(C).
    2. Fan Wang & Yina Hong & Xiaohuan Zhao, 2025. "Research and Comparative Analysis of Energy Management Strategies for Hybrid Electric Vehicles: A Review," Energies, MDPI, vol. 18(11), pages 1-28, May.
    3. Ma, Zhikai & Huo, Qian & Wang, Wei & Zhang, Tao, 2023. "Voltage-temperature aware thermal runaway alarming framework for electric vehicles via deep learning with attention mechanism in time-frequency domain," Energy, Elsevier, vol. 278(C).
    4. Lei, Nuo & Zhang, Hao & Hu, Jingjing & Hu, Zunyan & Wang, Zhi, 2025. "Sim-to-real design and development of reinforcement learning-based energy management strategies for fuel cell electric vehicles," Applied Energy, Elsevier, vol. 393(C).
    5. Louback, Eduardo & Biswas, Atriya & Machado, Fabricio & Emadi, Ali, 2024. "A review of the design process of energy management systems for dual-motor battery electric vehicles," Renewable and Sustainable Energy Reviews, Elsevier, vol. 193(C).
    6. Wu, Jiajun & Liu, Hui & Ren, Xiaolei & Nie, Shida & Qin, Yechen & Han, Lijin, 2025. "A multi-objective optimization approach for regenerative braking control in electric vehicles using MPE-SAC algorithm," Energy, Elsevier, vol. 318(C).
    7. Han, Lijin & You, Congwen & Yang, Ningkang & Liu, Hui & Chen, Ke & Xiang, Changle, 2024. "Adaptive real-time energy management strategy using heuristic search for off-road hybrid electric vehicles," Energy, Elsevier, vol. 304(C).
    8. Yang, Ningkang & Han, Lijin & Bo, Lin & Liu, Baoshuai & Chen, Xiuqi & Liu, Hui & Xiang, Changle, 2023. "Real-time adaptive energy management for off-road hybrid electric vehicles based on decision-time planning," Energy, Elsevier, vol. 282(C).
    9. Wenna Xu & Hao Huang & Chun Wang & Yixin Hu & Xinmei Gao, 2025. "Research on Multi-Objective Parameter Matching and Stepwise Energy Management Strategies for Hybrid Energy Storage Systems," Energies, MDPI, vol. 18(6), pages 1-22, March.
    10. Jia, Yuan & Liu, Yonggang & Zhang, Yuanjian & Chen, Zheng & Zhang, Yi, 2025. "Longitudinal-vertical integrated cooperative control of distributed drive electric vehicle considering optimization of energy economy and comfort," Energy, Elsevier, vol. 340(C).
    11. Chen, Fujun & Wang, Bowen & Ni, Meng & Gong, Zhichao & Jiao, Kui, 2024. "Online energy management strategy for ammonia-hydrogen hybrid electric vehicles harnessing deep reinforcement learning," Energy, Elsevier, vol. 301(C).
    12. Zhang, Hao & Chen, Boli & Lei, Nuo & Li, Bingbing & Chen, Chaoyi & Wang, Zhi, 2024. "Coupled velocity and energy management optimization of connected hybrid electric vehicles for maximum collective efficiency," Applied Energy, Elsevier, vol. 360(C).
    13. Iqbal, Najam & He, Guanzhang & Wang, Hu & Lin, Zhiqiang & Zheng, Zunqing & Yao, Mingfa, 2025. "Holistic energy management strategy for hybrid electric heavy-duty vehicles based on proximal policy optimization with the consideration of cabin temperature comfort," Energy, Elsevier, vol. 326(C).
    14. Yu, Sichen & Peng, Jiankun & Zhou, Jiaxuan & Ren, Tinghui & Wu, Jingda & Fan, Yi, 2025. "Durability-enhanced decision-making with style awareness for autonomous hydrogen fuel cell vehicle based on integrated reinforcement learning approaches," Energy, Elsevier, vol. 336(C).
    15. Seydali Ferahtia & Hegazy Rezk & Rania M. Ghoniem & Ahmed Fathy & Reem Alkanhel & Mohamed M. Ghonem, 2023. "Optimal Energy Management for Hydrogen Economy in a Hybrid Electric Vehicle," Sustainability, MDPI, vol. 15(4), pages 1-19, February.
    16. Huang, Ruchen & He, Hongwen & Su, Qicong & Wu, Jingda, 2025. "Towards sustainable and intelligent urban transportation: A novel deep transfer reinforcement learning framework for eco-driving of fuel cell buses," Energy, Elsevier, vol. 330(C).
    17. Zhang, Baodi & Chang, Liang & Teng, Teng & Chen, Qifang & Li, Qiangwei & Cao, Yaoguang & Yang, Shichun & Zhang, Xin, 2024. "Multi-objective optimization with Q-learning for cruise and power allocation control parameters of connected fuel cell hybrid vehicles," Applied Energy, Elsevier, vol. 373(C).
    18. Dagang Lu & Yu Chen & Yan Sun & Wenxuan Wei & Shilin Ji & Hongshuo Ruan & Fengyan Yi & Chunchun Jia & Donghai Hu & Kunpeng Tang & Song Huang & Jing Wang, 2025. "Research Progress in Multi-Domain and Cross-Domain AI Management and Control for Intelligent Electric Vehicles," Energies, MDPI, vol. 18(17), pages 1-52, August.
    19. Hu, Dong & Huang, Chao & Yin, Guodong & Li, Yangmin & Huang, Yue & Huang, Hailong & Wu, Jingda & Li, Wenfei & Xie, Hui, 2024. "A transfer-based reinforcement learning collaborative energy management strategy for extended-range electric buses with cabin temperature comfort consideration," Energy, Elsevier, vol. 290(C).
    20. Liu, Weirong & Yao, Pengfei & Wu, Yue & Duan, Lijun & Li, Heng & Peng, Jun, 2025. "Imitation reinforcement learning energy management for electric vehicles with hybrid energy storage system," Applied Energy, Elsevier, vol. 378(PA).

    More about this item


    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:399:y:2025:i:c:s0306261925012255. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.