Shared learning of powertrain control policies for vehicle fleets
DOI: 10.1016/j.apenergy.2024.123217
References listed on IDEAS
- Yang, Ningkang & Ruan, Shumin & Han, Lijin & Liu, Hui & Guo, Lingxiong & Xiang, Changle, 2023. "Reinforcement learning-based real-time intelligent energy management for hybrid electric vehicles in a model predictive control framework," Energy, Elsevier, vol. 270(C).
- Sun, Wenjing & Zou, Yuan & Zhang, Xudong & Guo, Ningyuan & Zhang, Bin & Du, Guodong, 2022. "High robustness energy management strategy of hybrid electric vehicle based on improved soft actor-critic deep reinforcement learning," Energy, Elsevier, vol. 258(C).
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Umme Mumtahina & Sanath Alahakoon & Peter Wolfs, 2025. "A Day-Ahead Optimal Battery Scheduling Considering the Grid Stability of Distribution Feeders," Energies, MDPI, vol. 18(5), pages 1-20, February.
- Han, Lijin & You, Congwen & Yang, Ningkang & Liu, Hui & Chen, Ke & Xiang, Changle, 2024. "Adaptive real-time energy management strategy using heuristic search for off-road hybrid electric vehicles," Energy, Elsevier, vol. 304(C).
- Xi, Lei & Shi, Yu & Quan, Yue & Liu, Zhihong, 2024. "Research on the multi-area cooperative control method for novel power systems," Energy, Elsevier, vol. 313(C).
- Kang, Hyuna & Jung, Seunghoon & Kim, Hakpyeong & Jeoung, Jaewon & Hong, Taehoon, 2024. "Reinforcement learning-based optimal scheduling model of battery energy storage system at the building level," Renewable and Sustainable Energy Reviews, Elsevier, vol. 190(PA).
- Zhang, Hao & Lei, Nuo & Chen, Boli & Li, Bingbing & Li, Rulong & Wang, Zhi, 2024. "Modeling and control system optimization for electrified vehicles: A data-driven approach," Energy, Elsevier, vol. 310(C).
- Zhang, Yuxin & Yang, Yalian & Zou, Yunge & Liu, Changdong, 2024. "Design of optimal control strategy for range extended electric vehicles considering additional noise, vibration and harshness constraints," Energy, Elsevier, vol. 310(C).
- Zhang, Chongbing & Ma, Yue & Li, Zhilin & Han, Lijin & Xiang, Changle & Wei, Zhengchao, 2024. "Fuel-economy-optimal power regulation for a twin-shaft turboshaft engine power generation unit based on high-pressure shaft power injection and variable shaft speed," Energy, Elsevier, vol. 309(C).
- Huang, Xuejin & Zhang, Jingyi & Ou, Kai & Huang, Yin & Kang, Zehao & Mao, Xuping & Zhou, Yujie & Xuan, Dongji, 2024. "Deep reinforcement learning-based health-conscious energy management for fuel cell hybrid electric vehicles in model predictive control framework," Energy, Elsevier, vol. 304(C).
- Zhang, Hao & Lei, Nuo & Liu, Shang & Fan, Qinhao & Wang, Zhi, 2023. "Data-driven predictive energy consumption minimization strategy for connected plug-in hybrid electric vehicles," Energy, Elsevier, vol. 283(C).
- Gao, Qinxiang & Lei, Tao & Yao, Wenli & Zhang, Xingyu & Zhang, Xiaobin, 2023. "A health-aware energy management strategy for fuel cell hybrid electric UAVs based on safe reinforcement learning," Energy, Elsevier, vol. 283(C).
- Mudhafar Al-Saadi & Maher Al-Greer & Michael Short, 2023. "Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey," Energies, MDPI, vol. 16(4), pages 1-38, February.
- Xu Wang & Ying Huang & Jian Wang, 2023. "Study on Driver-Oriented Energy Management Strategy for Hybrid Heavy-Duty Off-Road Vehicles under Aggressive Transient Operating Condition," Sustainability, MDPI, vol. 15(9), pages 1-25, May.
- Chang, Chengcheng & Zhao, Wanzhong & Wang, Chunyan & Luan, Zhongkai, 2023. "An energy management strategy of deep reinforcement learning based on multi-agent architecture under self-generating conditions," Energy, Elsevier, vol. 283(C).
- Wang, Hanchen & Arjmandzadeh, Ziba & Ye, Yiming & Zhang, Jiangfeng & Xu, Bin, 2024. "FlexNet: A warm start method for deep reinforcement learning in hybrid electric vehicle energy management applications," Energy, Elsevier, vol. 288(C).
- Lee, Junhyeok & Shin, Youngchul & Moon, Ilkyeong, 2024. "A hybrid deep reinforcement learning approach for a proactive transshipment of fresh food in the online–offline channel system," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 187(C).
- Zhang, Yagang & Wang, Hui & Wang, Jingchao & Cheng, Xiaodan & Wang, Tong & Zhao, Zheng, 2024. "Ensemble optimization approach based on hybrid mode decomposition and intelligent technology for wind power prediction system," Energy, Elsevier, vol. 292(C).
- Yang, Ningkang & Han, Lijin & Bo, Lin & Liu, Baoshuai & Chen, Xiuqi & Liu, Hui & Xiang, Changle, 2023. "Real-time adaptive energy management for off-road hybrid electric vehicles based on decision-time planning," Energy, Elsevier, vol. 282(C).
- Chen, Fujun & Wang, Bowen & Ni, Meng & Gong, Zhichao & Jiao, Kui, 2024. "Online energy management strategy for ammonia-hydrogen hybrid electric vehicles harnessing deep reinforcement learning," Energy, Elsevier, vol. 301(C).
- Zhang, Dongfang & Sun, Wei & Zou, Yuan & Zhang, Xudong & Zhang, Yiwei, 2024. "An improved soft actor-critic-based energy management strategy of heavy-duty hybrid electric vehicles with dual-engine system," Energy, Elsevier, vol. 308(C).
- Liu, Zemin Eitan & Li, Yong & Zhou, Quan & Shuai, Bin & Hua, Min & Xu, Hongming & Xu, Lubing & Tan, Guikun & Li, Yanfei, 2025. "Real-time energy management for HEV combining naturalistic driving data and deep reinforcement learning with high generalization," Applied Energy, Elsevier, vol. 377(PA).
More about this item
Keywords
Reinforcement learning; Vehicle fleet; Shared learning; Powertrain control
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:365:y:2024:i:c:s0306261924006007. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.