Authors
Listed:
- Song, Ge
- Xie, Hongbin
- Zhang, Jingyuan
- Fu, Hongdi
- Shi, Zhuoran
- Feng, Defan
- Song, Xuan
- Zhang, Haoran
Abstract
With the rapid adoption and rising sales of electric vehicles (EVs), managing EV charging energy and maximizing the utilization of green energy have become increasingly critical. Existing studies have shown that reinforcement learning can substantially improve power-dispatch efficiency. However, in complex scenarios involving multiple stations and charging sites, significant challenges remain: leveraging mutual information to capture long-term temporal relationships and coping with massive state and action spaces. To address these gaps (large-scale data, strong long-term temporal dependencies, and communication challenges in multi-station collaborative EV charging energy management), we propose MAHEM, a transformer-based multi-agent reinforcement learning algorithm. MAHEM uses a transformer to capture long-term temporal features in sequential data during distributed execution, and reduces the complexity of the action space through Q-value decomposition. Agents communicate effectively through the attention mechanism, while the transformer efficiently captures long-term temporal information, accelerating training and convergence by predicting future states. Experimental results show that, compared with existing baselines, our method reduces the total charging cost across stations by 31.6% and achieves the best performance across various environments, robustness tests, and transfer tests. These results highlight the practicality and effectiveness of MAHEM for EV energy management systems.
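The abstract names two mechanisms, attention-based inter-agent communication and Q-value decomposition, without giving architectural details. The sketch below is a minimal, hypothetical illustration of how such components are commonly realized, not the paper's actual implementation: `attention_communication` mixes per-agent state embeddings via scaled dot-product attention (with random matrices standing in for learned projection weights), and `decomposed_q` uses a VDN-style additive decomposition, one standard choice that lets each agent search only its own action space. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_communication(agent_states, d_k=8, seed=0):
    """Scaled dot-product attention over per-agent state embeddings.

    Each agent attends to every agent's state, so local observations are
    mixed into a shared representation -- one plausible reading of
    'communication through the attention mechanism'. Random projection
    matrices stand in for learned Q/K/V weights.
    """
    rng = np.random.default_rng(seed)
    n, d = agent_states.shape
    W_q, W_k, W_v = (rng.standard_normal((d, d_k)) for _ in range(3))
    Q, K, V = agent_states @ W_q, agent_states @ W_k, agent_states @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n, n) agent-to-agent weights
    return attn @ V                         # communicated features, (n, d_k)

def decomposed_q(per_agent_q):
    """VDN-style additive decomposition: the joint Q-value is the sum of
    per-agent Q-values, so the joint action space never has to be
    enumerated explicitly."""
    return float(np.sum(per_agent_q))

# Toy example: 3 charging-station agents with 4-dimensional local states.
states = np.ones((3, 4))
features = attention_communication(states)   # shape (3, 8)
q_total = decomposed_q([1.5, -0.5, 2.0])     # 3.0
```

With identical input states, the attention weights are uniform and every agent receives the same communicated feature vector; with heterogeneous states (e.g. stations at different load levels), the weights differentiate which peers each agent attends to. MAHEM's actual decomposition may be more expressive than a plain sum (e.g. a learned mixing network), which the abstract does not specify.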
Suggested Citation
Song, Ge & Xie, Hongbin & Zhang, Jingyuan & Fu, Hongdi & Shi, Zhuoran & Feng, Defan & Song, Xuan & Zhang, Haoran, 2025.
"Long-term efficient energy management for multi-station collaborative electric vehicle charging: A transformer-based multi-agent reinforcement learning approach,"
Applied Energy, Elsevier, vol. 397(C).
Handle:
RePEc:eee:appene:v:397:y:2025:i:c:s0306261925010451
DOI: 10.1016/j.apenergy.2025.126315
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.