Enhanced applicability of reinforcement learning-based energy management by pivotal state-based Markov trajectories
DOI: 10.1016/j.energy.2025.135115
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
References listed on IDEAS
- Li, Zhenhe & Khajepour, Amir & Song, Jinchun, 2019. "A comprehensive review of the key technologies for pure electric vehicles," Energy, Elsevier, vol. 182(C), pages 824-839.
- Peng, Jiankun & He, Hongwen & Xiong, Rui, 2017. "Rule based energy management strategy for a series–parallel plug-in hybrid electric bus optimized by dynamic programming," Applied Energy, Elsevier, vol. 185(P2), pages 1633-1643.
- Chen, Jiaxin & Tang, Xiaolin & Yang, Kai, 2024. "A unified benchmark for deep reinforcement learning-based energy management: Novel training ideas with the unweighted reward," Energy, Elsevier, vol. 307(C).
- Wang, Hanchen & Ye, Yiming & Zhang, Jiangfeng & Xu, Bin, 2023. "A comparative study of 13 deep reinforcement learning based energy management methods for a hybrid electric vehicle," Energy, Elsevier, vol. 266(C).
- Henry X. Liu & Shuo Feng, 2024. "Curse of rarity for autonomous vehicles," Nature Communications, Nature, vol. 15(1), pages 1-5, December.
- David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui & al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
- Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
- Qiu, Dawei & Wang, Yi & Hua, Weiqi & Strbac, Goran, 2023. "Reinforcement learning for electric vehicle applications in power systems: A critical review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 173(C).
- Elia Kaufmann & Leonard Bauersfeld & Antonio Loquercio & Matthias Müller & Vladlen Koltun & Davide Scaramuzza, 2023. "Champion-level drone racing using deep reinforcement learning," Nature, Nature, vol. 620(7976), pages 982-987, August.
- Xu, Bin & Rathod, Dhruvang & Zhang, Darui & Yebi, Adamu & Zhang, Xueyu & Li, Xiaoya & Filipi, Zoran, 2020. "Parametric study on reinforcement learning optimized energy management strategy for a hybrid electric vehicle," Applied Energy, Elsevier, vol. 259(C).
- Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
- Zhang, Yang & Li, Qingxin & Wen, Chengqing & Liu, Mingming & Yang, Xinhua & Xu, Hongming & Li, Ji, 2024. "Predictive equivalent consumption minimization strategy based on driving pattern personalized reconstruction," Applied Energy, Elsevier, vol. 367(C).
- Peter R. Wurman & Samuel Barrett & Kenta Kawamoto & James MacGlashan & Kaushik Subramanian & Thomas J. Walsh & Roberto Capobianco & Alisa Devlic & Franziska Eckert & Florian Fuchs & Leilani Gilpin & al., 2022. "Outracing champion Gran Turismo drivers with deep reinforcement learning," Nature, Nature, vol. 602(7896), pages 223-228, February.
- Liu, Zongwei & Hao, Han & Cheng, Xiang & Zhao, Fuquan, 2018. "Critical issues of energy efficient and new energy vehicles development in China," Energy Policy, Elsevier, vol. 115(C), pages 92-97.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Chen, Jiaxin & Tang, Xiaolin & Yang, Kai, 2024. "A unified benchmark for deep reinforcement learning-based energy management: Novel training ideas with the unweighted reward," Energy, Elsevier, vol. 307(C).
- Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
- Jinming Xu & Yuan Lin, 2024. "Energy Management for Hybrid Electric Vehicles Using Safe Hybrid-Action Reinforcement Learning," Mathematics, MDPI, vol. 12(5), pages 1-20, February.
- Hu, Dong & Xie, Hui & Song, Kang & Zhang, Yuanyuan & Yan, Long, 2023. "An apprenticeship-reinforcement learning scheme based on expert demonstrations for energy management strategy of hybrid electric vehicles," Applied Energy, Elsevier, vol. 342(C).
- Alessia Musa & Pier Giuseppe Anselma & Giovanni Belingardi & Daniela Anna Misul, 2023. "Energy Management in Hybrid Electric Vehicles: A Q-Learning Solution for Enhanced Drivability and Energy Efficiency," Energies, MDPI, vol. 17(1), pages 1-20, December.
- Feng, Zhiyan & Zhang, Qingang & Zhang, Yiming & Fei, Liangyu & Jiang, Fei & Zhao, Shengdun, 2024. "Practicability analysis of online deep reinforcement learning towards energy management strategy of 4WD-BEVs driven by dual-motor in-wheel motors," Energy, Elsevier, vol. 290(C).
- Niu, Zegong & He, Hongwen, 2024. "A data-driven solution for intelligent power allocation of connected hybrid electric vehicles inspired by offline deep reinforcement learning in V2X scenario," Applied Energy, Elsevier, vol. 372(C).
- Li, Jianwei & Liu, Jie & Yang, Qingqing & Wang, Tianci & He, Hongwen & Wang, Hanxiao & Sun, Fengchun, 2025. "Reinforcement learning based energy management for fuel cell hybrid electric vehicles: A comprehensive review on decision process reformulation and strategy implementation," Renewable and Sustainable Energy Reviews, Elsevier, vol. 213(C).
- Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
- Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
- Song Chen & Jiaxu Liu & Pengkai Wang & Chao Xu & Shengze Cai & Jian Chu, 2024. "Accelerated optimization in deep learning with a proportional-integral-derivative controller," Nature Communications, Nature, vol. 15(1), pages 1-16, December.
- Anselma, Pier Giuseppe, 2022. "Computationally efficient evaluation of fuel and electrical energy economy of plug-in hybrid electric vehicles with smooth driving constraints," Applied Energy, Elsevier, vol. 307(C).
- Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
- Nweye, Kingsley & Sankaranarayanan, Siva & Nagy, Zoltan, 2023. "MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities," Applied Energy, Elsevier, vol. 346(C).
- Christian Montaleza & Paul Arévalo & Jimmy Gallegos & Francisco Jurado, 2024. "Enhancing Energy Management Strategies for Extended-Range Electric Vehicles through Deep Q-Learning and Continuous State Representation," Energies, MDPI, vol. 17(2), pages 1-21, January.
- Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
- Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
- Cheng, Shen & Zhao, Gaiju & Gao, Ming & Shi, Yuetao & Huang, Mingming & Yousefi, Nasser, 2021. "Optimal hybrid energy system for locomotive utilizing improved Locust Swarm optimizer," Energy, Elsevier, vol. 218(C).
- Shi, Dehua & Xu, Han & Wang, Shaohua & Hu, Jia & Chen, Long & Yin, Chunfang, 2024. "Deep reinforcement learning based adaptive energy management for plug-in hybrid electric vehicle with double deep Q-network," Energy, Elsevier, vol. 305(C).
- Raeid Saqur, 2024. "What Teaches Robots to Walk, Teaches Them to Trade too -- Regime Adaptive Execution using Informed Data and LLMs," Papers 2406.15508, arXiv.org.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:319:y:2025:i:c:s0360544225007571. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: http://www.journals.elsevier.com/energy.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.