Author
Listed:
- Kerbel, Lindsey
- Ayalew, Beshah
- Ivanco, Andrej
Abstract
Data-driven deep reinforcement learning (DRL)-based approaches have shown significant potential for improving the performance of vehicle control systems, in terms of energy consumption and other metrics, by allowing adaptation to the environments in which the vehicles are deployed. However, training DRL policies that work well in highly dynamic real-world environments is challenged by data efficiency and learning stability issues, accompanied by high variance in performance. In this paper, we propose a novel cooperative learning approach to improve learning performance and reduce variance by continuously sharing experiences among powertrain control agents for a fleet of vehicles. The key contribution is the concept of a dynamic ad hoc teaming mechanism for decentralized and scalable mutual knowledge distillation between vehicles serving a distribution of routes. Our approach enables an asynchronous implementation that can operate whenever connectivity is available, thus removing a constraint for practical adoption. We compare two variants of the proposed framework with two other state-of-the-art alternatives in three scenarios that represent various deployments for a fleet. We find that the proposed framework significantly accelerates learning by reducing variance and improves long-term fleet mean total cycle rewards by up to 14% compared to a baseline of individually learning agents. This improvement is on the same order as that achieved with centralized shared learning approaches, but without suffering their limitations of computational complexity and poor scalability. We also find that the proposed shared learning approach improves the adaptability of vehicle control agents to unfamiliar routes.
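The abstract describes the mechanism only at a high level, so a small, self-contained sketch of what mutual policy distillation between two fleet agents can look like is given below. This is not the authors' algorithm: the toy linear-softmax policies, the cross-entropy-style distillation update, and every name in the snippet are assumptions made solely to illustrate the general idea of peer agents pulling their action distributions toward one another on shared experience.

```python
# Illustrative sketch only: the paper's actual teaming and distillation algorithm is
# not reproduced here. The linear-softmax "policies", the cross-entropy distillation
# update, and all names below are assumptions made purely for illustration.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-8):
    # Mean KL(p || q) over a batch of states.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

class ToyPolicy:
    # Toy linear-softmax policy standing in for a DRL actor network.
    def __init__(self, n_features, n_actions, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, n_actions))

    def probs(self, states):
        return softmax(states @ self.W)

def mutual_distillation_step(agent_a, agent_b, shared_states, lr=0.5):
    # One symmetric distillation update: each agent nudges its action distribution
    # toward its peer's on a batch of shared states. For this toy linear policy the
    # update is an exact gradient step on a cross-entropy distillation loss; a real
    # DRL agent would instead add such a term to its actor loss and backpropagate.
    p_a, p_b = agent_a.probs(shared_states), agent_b.probs(shared_states)
    n = len(shared_states)
    agent_a.W += lr * shared_states.T @ (p_b - p_a) / n
    agent_b.W += lr * shared_states.T @ (p_a - p_b) / n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = rng.standard_normal((64, 8))            # batch of shared experiences
    a, b = ToyPolicy(8, 4, seed=1), ToyPolicy(8, 4, seed=2)
    print("KL before:", round(kl(a.probs(states), b.probs(states)), 4))
    for _ in range(100):
        mutual_distillation_step(a, b, states)
    print("KL after: ", round(kl(a.probs(states), b.probs(states)), 4))
```

Running the sketch shows the KL divergence between the two peer policies shrinking as they distill into each other; in the paper's setting this kind of exchange happens asynchronously among the agents of a dynamically formed team whenever connectivity allows.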
Suggested Citation
Kerbel, Lindsey & Ayalew, Beshah & Ivanco, Andrej, 2025.
"Dynamic ad hoc teaming and mutual distillation for cooperative learning of powertrain control policies for vehicle fleets,"
Applied Energy, Elsevier, vol. 399(C).
Handle:
RePEc:eee:appene:v:399:y:2025:i:c:s0306261925012255
DOI: 10.1016/j.apenergy.2025.126495
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:399:y:2025:i:c:s0306261925012255. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.