
Shared learning of powertrain control policies for vehicle fleets

Authors

  • Kerbel, Lindsey
  • Ayalew, Beshah
  • Ivanco, Andrej

Abstract

Emerging data-driven approaches, such as deep reinforcement learning (DRL), aim at on-the-field learning of powertrain control policies that optimize fuel economy and other performance metrics. Indeed, they have shown great potential in this regard for individual vehicles on specific routes/drive cycles. However, for fleets of vehicles that must service a distribution of routes, DRL approaches struggle with learning stability issues that result in high variances and challenge their practical deployment. In this paper, we present a novel framework for shared learning among a fleet of vehicles through the use of a distilled group policy as the knowledge sharing mechanism for the policy learning computations at each vehicle. We detail the mathematical formulation that makes this possible. Several scenarios are considered to analyze the framework’s functionality, performance, and computational scalability with fleet size. Comparisons of the cumulative performance of fleets using our proposed shared learning approach with a baseline of individual learning agents and another state-of-the-art approach with a centralized learner show clear advantages to our approach. For example, we find a fleet average asymptotic improvement of 8.5% in fuel economy compared to the baseline while also improving on the metrics of acceleration error and shifting frequency for fleets serving a distribution of suburban routes. Furthermore, we include demonstrative results that show how the framework reduces variance within a fleet and also how it helps individual agents adapt better to new routes.
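
For readers of the abstract, the following minimal PyTorch sketch illustrates one way a distilled group policy could serve as the knowledge-sharing mechanism among a fleet of locally learning agents: the vehicles' local policies are periodically distilled into a single group policy, which then regularizes each vehicle's local update. The network layout, the KL-divergence distillation objective, the coefficient beta, and all names (PolicyNet, distill_group_policy, group_policy_regularizer) are illustrative assumptions, not the authors' published formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Small discrete-action policy: powertrain state -> action logits (hypothetical sizes)."""
    def __init__(self, state_dim=4, n_actions=5, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, s):
        return self.net(s)

def distill_group_policy(agent_policies, states, epochs=50, lr=1e-3):
    """Fit one group policy to the fleet's averaged action distribution on shared states."""
    group = PolicyNet()
    opt = torch.optim.Adam(group.parameters(), lr=lr)
    with torch.no_grad():
        # Teacher target: mean of the agents' action probabilities (an illustrative choice).
        target = torch.stack([F.softmax(p(states), dim=-1) for p in agent_policies]).mean(dim=0)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(group(states), dim=-1), target, reduction="batchmean")
        loss.backward()
        opt.step()
    return group

def group_policy_regularizer(agent_logits, group_policy, states, beta=0.1):
    """KL penalty added to a vehicle's local RL loss, pulling it toward the group policy."""
    with torch.no_grad():
        group_probs = F.softmax(group_policy(states), dim=-1)
    return beta * F.kl_div(F.log_softmax(agent_logits, dim=-1), group_probs, reduction="batchmean")

if __name__ == "__main__":
    torch.manual_seed(0)
    fleet = [PolicyNet() for _ in range(3)]   # three vehicles' local policies
    states = torch.randn(64, 4)               # placeholder powertrain states
    group = distill_group_policy(fleet, states)
    reg = group_policy_regularizer(fleet[0](states), group, states)
    print(f"distillation penalty for vehicle 0: {reg.item():.4f}")

Under this reading, each vehicle would add the penalty to its local actor loss, so fleet knowledge flows through the distilled group policy rather than through raw data exchange; how the paper actually schedules distillation and weights the penalty is specified in the full text, not here.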

Suggested Citation

  • Kerbel, Lindsey & Ayalew, Beshah & Ivanco, Andrej, 2024. "Shared learning of powertrain control policies for vehicle fleets," Applied Energy, Elsevier, vol. 365(C).
  • Handle: RePEc:eee:appene:v:365:y:2024:i:c:s0306261924006007
    DOI: 10.1016/j.apenergy.2024.123217

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924006007
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123217?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access with your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhang, Yagang & Wang, Hui & Wang, Jingchao & Cheng, Xiaodan & Wang, Tong & Zhao, Zheng, 2024. "Ensemble optimization approach based on hybrid mode decomposition and intelligent technology for wind power prediction system," Energy, Elsevier, vol. 292(C).
    2. Umme Mumtahina & Sanath Alahakoon & Peter Wolfs, 2025. "A Day-Ahead Optimal Battery Scheduling Considering the Grid Stability of Distribution Feeders," Energies, MDPI, vol. 18(5), pages 1-20, February.
    3. Han, Lijin & You, Congwen & Yang, Ningkang & Liu, Hui & Chen, Ke & Xiang, Changle, 2024. "Adaptive real-time energy management strategy using heuristic search for off-road hybrid electric vehicles," Energy, Elsevier, vol. 304(C).
    4. Yang, Ningkang & Han, Lijin & Bo, Lin & Liu, Baoshuai & Chen, Xiuqi & Liu, Hui & Xiang, Changle, 2023. "Real-time adaptive energy management for off-road hybrid electric vehicles based on decision-time planning," Energy, Elsevier, vol. 282(C).
    5. Tian, Weiyong & Zhang, Xiaohui & Zhou, Peng & Guo, Ruixue, 2025. "Review of energy management technologies for unmanned aerial vehicles powered by hydrogen fuel cell," Energy, Elsevier, vol. 323(C).
    6. Chen, Fujun & Wang, Bowen & Ni, Meng & Gong, Zhichao & Jiao, Kui, 2024. "Online energy management strategy for ammonia-hydrogen hybrid electric vehicles harnessing deep reinforcement learning," Energy, Elsevier, vol. 301(C).
    7. Xi, Lei & Shi, Yu & Quan, Yue & Liu, Zhihong, 2024. "Research on the multi-area cooperative control method for novel power systems," Energy, Elsevier, vol. 313(C).
    8. Zhang, Dongfang & Sun, Wei & Zou, Yuan & Zhang, Xudong, 2025. "Energy management in HDHEV with dual APUs: Enhancing soft actor-critic using clustered experience replay and multi-dimensional priority sampling," Energy, Elsevier, vol. 319(C).
    9. Zhang, Dongfang & Sun, Wei & Zou, Yuan & Zhang, Xudong & Zhang, Yiwei, 2024. "An improved soft actor-critic-based energy management strategy of heavy-duty hybrid electric vehicles with dual-engine system," Energy, Elsevier, vol. 308(C).
    10. Kang, Hyuna & Jung, Seunghoon & Kim, Hakpyeong & Jeoung, Jaewon & Hong, Taehoon, 2024. "Reinforcement learning-based optimal scheduling model of battery energy storage system at the building level," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).
    11. Zhang, Hao & Lei, Nuo & Chen, Boli & Li, Bingbing & Li, Rulong & Wang, Zhi, 2024. "Modeling and control system optimization for electrified vehicles: A data-driven approach," Energy, Elsevier, vol. 310(C).
    12. Liu, Zemin Eitan & Li, Yong & Zhou, Quan & Shuai, Bin & Hua, Min & Xu, Hongming & Xu, Lubing & Tan, Guikun & Li, Yanfei, 2025. "Real-time energy management for HEV combining naturalistic driving data and deep reinforcement learning with high generalization," Applied Energy, Elsevier, vol. 377(PA).
    13. Wang, Jinhai & Du, Changqing & Yan, Fuwu & Hua, Min & Gongye, Xiangyu & Yuan, Quan & Xu, Hongming & Zhou, Quan, 2025. "Bayesian optimization for hyper-parameter tuning of an improved twin delayed deep deterministic policy gradients based energy management strategy for plug-in hybrid electric vehicles," Applied Energy, Elsevier, vol. 381(C).
    14. Zhang, Yuxin & Yang, Yalian & Zou, Yunge & Liu, Changdong, 2024. "Design of optimal control strategy for range extended electric vehicles considering additional noise, vibration and harshness constraints," Energy, Elsevier, vol. 310(C).
    15. Ma, Xiaokang & Liu, Hui & Han, Lijin & Yang, Ningkang & Li, Mingyi, 2025. "An real-time intelligent energy management based on deep reinforcement learning and model predictive control for hybrid electric vehicles considering battery life," Energy, Elsevier, vol. 324(C).
    16. Lihong Dai & Peng Hu & Tianyou Wang & Guosheng Bian & Haoye Liu, 2024. "Optimal Rule-Interposing Reinforcement Learning-Based Energy Management of Series—Parallel-Connected Hybrid Electric Vehicles," Sustainability, MDPI, vol. 16(16), pages 1-17, August.
    17. Zhang, Chongbing & Ma, Yue & Li, Zhilin & Han, Lijin & Xiang, Changle & Wei, Zhengchao, 2024. "Fuel-economy-optimal power regulation for a twin-shaft turboshaft engine power generation unit based on high-pressure shaft power injection and variable shaft speed," Energy, Elsevier, vol. 309(C).
    18. Fan Wang & Yina Hong & Xiaohuan Zhao, 2025. "Research and Comparative Analysis of Energy Management Strategies for Hybrid Electric Vehicles: A Review," Energies, MDPI, vol. 18(11), pages 1-28, May.
    19. Huang, Xuejin & Zhang, Jingyi & Ou, Kai & Huang, Yin & Kang, Zehao & Mao, Xuping & Zhou, Yujie & Xuan, Dongji, 2024. "Deep reinforcement learning-based health-conscious energy management for fuel cell hybrid electric vehicles in model predictive control framework," Energy, Elsevier, vol. 304(C).
    20. Zhang, Hao & Lei, Nuo & Liu, Shang & Fan, Qinhao & Wang, Zhi, 2023. "Data-driven predictive energy consumption minimization strategy for connected plug-in hybrid electric vehicles," Energy, Elsevier, vol. 283(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:365:y:2024:i:c:s0306261924006007. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.