IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v365y2024ics0306261924006007.html

Shared learning of powertrain control policies for vehicle fleets

Author

Listed:
  • Kerbel, Lindsey
  • Ayalew, Beshah
  • Ivanco, Andrej

Abstract

Emerging data-driven approaches, such as deep reinforcement learning (DRL), aim at on-the-field learning of powertrain control policies that optimize fuel economy and other performance metrics. Indeed, they have shown great potential in this regard for individual vehicles on specific routes/drive cycles. However, for fleets of vehicles that must service a distribution of routes, DRL approaches struggle with learning stability issues that result in high variances and challenge their practical deployment. In this paper, we present a novel framework for shared learning among a fleet of vehicles through the use of a distilled group policy as the knowledge sharing mechanism for the policy learning computations at each vehicle. We detail the mathematical formulation that makes this possible. Several scenarios are considered to analyze the framework’s functionality, performance, and computational scalability with fleet size. Comparisons of the cumulative performance of fleets using our proposed shared learning approach with a baseline of individual learning agents and another state-of-the-art approach with a centralized learner show clear advantages to our approach. For example, we find a fleet average asymptotic improvement of 8.5% in fuel economy compared to the baseline while also improving on the metrics of acceleration error and shifting frequency for fleets serving a distribution of suburban routes. Furthermore, we include demonstrative results that show how the framework reduces variance within a fleet and also how it helps individual agents adapt better to new routes.
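The knowledge-sharing mechanism described in the abstract — a distilled group policy that aggregates the fleet's individual policies and pulls each vehicle's learner toward it — can be illustrated with a simplified, hypothetical sketch. The paper's actual mathematical formulation is not reproduced on this page; the code below only assumes discrete action distributions per agent and uses the standard fact that the arithmetic mean of the agents' distributions minimizes the summed forward KL divergence to a single distilled policy. All function names (`distill`, `share_step`) and the blending parameter `beta` are illustrative inventions, not the authors' notation.

```python
import math

def distill(policies):
    # Distilled group policy: the arithmetic mean over agents'
    # action distributions minimizes sum_i KL(p_i || q) over q.
    n = len(policies)
    k = len(policies[0])
    return [sum(p[a] for p in policies) / n for a in range(k)]

def kl(p, q):
    # Forward KL divergence KL(p || q) for discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def share_step(policies, beta=0.5):
    # One shared-learning step: blend each vehicle's local policy
    # toward the distilled group policy (beta = sharing strength).
    q = distill(policies)
    blended = [[(1 - beta) * p[a] + beta * q[a] for a in range(len(q))]
               for p in policies]
    return blended, q
```

In this toy setting, each sharing step provably moves every agent closer (in KL) to the group consensus while keeping each row a valid probability distribution — a crude stand-in for the variance-reduction effect the abstract reports for fleets serving a distribution of routes.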

Suggested Citation

  • Kerbel, Lindsey & Ayalew, Beshah & Ivanco, Andrej, 2024. "Shared learning of powertrain control policies for vehicle fleets," Applied Energy, Elsevier, vol. 365(C).
  • Handle: RePEc:eee:appene:v:365:y:2024:i:c:s0306261924006007
    DOI: 10.1016/j.apenergy.2024.123217

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261924006007
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2024.123217?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Yang, Ningkang & Ruan, Shumin & Han, Lijin & Liu, Hui & Guo, Lingxiong & Xiang, Changle, 2023. "Reinforcement learning-based real-time intelligent energy management for hybrid electric vehicles in a model predictive control framework," Energy, Elsevier, vol. 270(C).
    2. Sun, Wenjing & Zou, Yuan & Zhang, Xudong & Guo, Ningyuan & Zhang, Bin & Du, Guodong, 2022. "High robustness energy management strategy of hybrid electric vehicle based on improved soft actor-critic deep reinforcement learning," Energy, Elsevier, vol. 258(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Umme Mumtahina & Sanath Alahakoon & Peter Wolfs, 2025. "A Day-Ahead Optimal Battery Scheduling Considering the Grid Stability of Distribution Feeders," Energies, MDPI, vol. 18(5), pages 1-20, February.
    2. Han, Lijin & You, Congwen & Yang, Ningkang & Liu, Hui & Chen, Ke & Xiang, Changle, 2024. "Adaptive real-time energy management strategy using heuristic search for off-road hybrid electric vehicles," Energy, Elsevier, vol. 304(C).
    3. Xi, Lei & Shi, Yu & Quan, Yue & Liu, Zhihong, 2024. "Research on the multi-area cooperative control method for novel power systems," Energy, Elsevier, vol. 313(C).
    4. Kang, Hyuna & Jung, Seunghoon & Kim, Hakpyeong & Jeoung, Jaewon & Hong, Taehoon, 2024. "Reinforcement learning-based optimal scheduling model of battery energy storage system at the building level," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).
    5. Zhang, Hao & Lei, Nuo & Chen, Boli & Li, Bingbing & Li, Rulong & Wang, Zhi, 2024. "Modeling and control system optimization for electrified vehicles: A data-driven approach," Energy, Elsevier, vol. 310(C).
    6. Zhang, Yuxin & Yang, Yalian & Zou, Yunge & Liu, Changdong, 2024. "Design of optimal control strategy for range extended electric vehicles considering additional noise, vibration and harshness constraints," Energy, Elsevier, vol. 310(C).
    7. Zhang, Chongbing & Ma, Yue & Li, Zhilin & Han, Lijin & Xiang, Changle & Wei, Zhengchao, 2024. "Fuel-economy-optimal power regulation for a twin-shaft turboshaft engine power generation unit based on high-pressure shaft power injection and variable shaft speed," Energy, Elsevier, vol. 309(C).
    8. Huang, Xuejin & Zhang, Jingyi & Ou, Kai & Huang, Yin & Kang, Zehao & Mao, Xuping & Zhou, Yujie & Xuan, Dongji, 2024. "Deep reinforcement learning-based health-conscious energy management for fuel cell hybrid electric vehicles in model predictive control framework," Energy, Elsevier, vol. 304(C).
    9. Zhang, Hao & Lei, Nuo & Liu, Shang & Fan, Qinhao & Wang, Zhi, 2023. "Data-driven predictive energy consumption minimization strategy for connected plug-in hybrid electric vehicles," Energy, Elsevier, vol. 283(C).
    10. Gao, Qinxiang & Lei, Tao & Yao, Wenli & Zhang, Xingyu & Zhang, Xiaobin, 2023. "A health-aware energy management strategy for fuel cell hybrid electric UAVs based on safe reinforcement learning," Energy, Elsevier, vol. 283(C).
    11. Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
    12. Xu Wang & Ying Huang & Jian Wang, 2023. "Study on Driver-Oriented Energy Management Strategy for Hybrid Heavy-Duty Off-Road Vehicles under Aggressive Transient Operating Condition," Sustainability, MDPI, vol. 15(9), pages 1-25, May.
    13. Chang, Chengcheng & Zhao, Wanzhong & Wang, Chunyan & Luan, Zhongkai, 2023. "An energy management strategy of deep reinforcement learning based on multi-agent architecture under self-generating conditions," Energy, Elsevier, vol. 283(C).
    14. Wang, Hanchen & Arjmandzadeh, Ziba & Ye, Yiming & Zhang, Jiangfeng & Xu, Bin, 2024. "FlexNet: A warm start method for deep reinforcement learning in hybrid electric vehicle energy management applications," Energy, Elsevier, vol. 288(C).
    15. Lee, Junhyeok & Shin, Youngchul & Moon, Ilkyeong, 2024. "A hybrid deep reinforcement learning approach for a proactive transshipment of fresh food in the online–offline channel system," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 187(C).
    16. Zhang, Yagang & Wang, Hui & Wang, Jingchao & Cheng, Xiaodan & Wang, Tong & Zhao, Zheng, 2024. "Ensemble optimization approach based on hybrid mode decomposition and intelligent technology for wind power prediction system," Energy, Elsevier, vol. 292(C).
    17. Yang, Ningkang & Han, Lijin & Bo, Lin & Liu, Baoshuai & Chen, Xiuqi & Liu, Hui & Xiang, Changle, 2023. "Real-time adaptive energy management for off-road hybrid electric vehicles based on decision-time planning," Energy, Elsevier, vol. 282(C).
    18. Chen, Fujun & Wang, Bowen & Ni, Meng & Gong, Zhichao & Jiao, Kui, 2024. "Online energy management strategy for ammonia-hydrogen hybrid electric vehicles harnessing deep reinforcement learning," Energy, Elsevier, vol. 301(C).
    19. Zhang, Dongfang & Sun, Wei & Zou, Yuan & Zhang, Xudong & Zhang, Yiwei, 2024. "An improved soft actor-critic-based energy management strategy of heavy-duty hybrid electric vehicles with dual-engine system," Energy, Elsevier, vol. 308(C).
    20. Liu, Zemin Eitan & Li, Yong & Zhou, Quan & Shuai, Bin & Hua, Min & Xu, Hongming & Xu, Lubing & Tan, Guikun & Li, Yanfei, 2025. "Real-time energy management for HEV combining naturalistic driving data and deep reinforcement learning with high generalization," Applied Energy, Elsevier, vol. 377(PA).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:365:y:2024:i:c:s0306261924006007. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.