Author
Listed:
- Tong, He
- Chu, Liang
- Zhao, Di
- Hou, Zhuoran
- Guo, Zhiqi
Abstract
Energy management strategies (EMSs) are crucial for improving the efficiency of hybrid electric vehicles (HEVs). To shift toward a more sustainable energy-saving paradigm, EMSs must integrate factors such as speed planning. Pulse-and-Glide (PnG) is a promising speed-planning method for fuel efficiency, but it struggles to balance comfort and fuel economy, limiting its adoption. Additionally, existing studies often oversimplify target speed profiles, restricting PnG's effectiveness in dynamic, real-world scenarios. To address these issues, this paper proposes PnG-Chaser, a novel deep reinforcement learning (DRL)-based framework that synergizes EMS and adaptive PnG speed planning in dynamic car-following contexts. PnG-Chaser uses a neural network controller trained via the Rainbow algorithm, supported by carefully designed reward functions and optimized hyperparameters. This data-driven framework interacts with the environment to generate control signals that optimize speed and energy management while ensuring safety and comfort. Experimental results demonstrate that PnG-Chaser achieves 90.29% of the fuel efficiency of a Dynamic Programming (DP)-based optimal EMS benchmark under training conditions and 90.87% under testing conditions. It also outperforms traditional Proportional-Integral-Derivative (PID) control in safety and adaptability, delivering significant energy savings (particularly in urban environments) while maintaining comparable comfort levels and real-time responsiveness. Moreover, PnG-Chaser is validated under 13 diverse driving cycles, underscoring its robustness. Testing on a real-world dataset further demonstrates its superior performance compared with other state-of-the-art DRL-based EMSs and confirms its promising potential for practical deployment.
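The abstract describes the controller only at a high level; the paper's actual reward functions are not reproduced here. As an illustration only, the sketch below shows the kind of multi-objective reward a car-following DRL agent of this type might optimize, trading off fuel economy, gap-keeping safety, and ride comfort. All term definitions, variable names, and weights (w_fuel, w_safety, w_comfort) are hypothetical assumptions, not taken from the paper.

```python
# Hypothetical reward shaping for an energy-speed co-optimization agent.
# Every weight and term here is an illustrative assumption; it is NOT the
# reward formulation used in PnG-Chaser.

def reward(fuel_rate, gap, desired_gap, jerk,
           w_fuel=1.0, w_safety=0.5, w_comfort=0.1):
    """Scalar reward combining fuel economy, car-following safety, and comfort.

    fuel_rate   : instantaneous fuel consumption (g/s); lower is better
    gap         : actual inter-vehicle distance (m)
    desired_gap : target following distance (m)
    jerk        : rate of change of acceleration (m/s^3); penalized for comfort
    """
    r_fuel = -w_fuel * fuel_rate                   # penalize fuel use
    r_safety = -w_safety * abs(gap - desired_gap)  # penalize gap deviation
    r_comfort = -w_comfort * jerk ** 2             # penalize harsh jerk
    return r_fuel + r_safety + r_comfort

# Example: moderate fuel use, 2 m short of the desired gap, mild jerk.
print(reward(fuel_rate=0.8, gap=28.0, desired_gap=30.0, jerk=0.5))
```

In a Rainbow-style setup, a scalar reward of this shape would be maximized over discrete control actions; the relative weights determine how the agent balances PnG fuel savings against comfort and safe following distance.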
Suggested Citation
Tong, He & Chu, Liang & Zhao, Di & Hou, Zhuoran & Guo, Zhiqi, 2025.
"Sustainable energy-speed co-optimization for hybrid electric vehicles in dynamic car-following scenarios via multifunctional deep learning policy,"
Energy, Elsevier, vol. 334(C).
Handle: RePEc:eee:energy:v:334:y:2025:i:c:s0360544225032645
DOI: 10.1016/j.energy.2025.137622