Author
Listed:
- Yunhu Huang
- Wenzhu Lai
- Dewang Chen
- Geng Lin
- Jiateng Yin
Abstract
In recent decades, automatic train operation (ATO) systems have been gradually adopted by many metro systems, primarily due to their cost-effectiveness and practicality. However, a critical examination reveals challenges in computational constraints, adaptability to unforeseen conditions, and multi-objective balancing that our research aims to address. In this paper, expert knowledge is combined with a deep reinforcement learning algorithm (Proximal Policy Optimization, PPO), and two enhanced intelligent train operation (EITO) algorithms are proposed. The first algorithm, EITOE, is based on an expert system containing expert rules and a heuristic expert inference method. Building on EITOE, we propose the EITOP algorithm, which uses PPO to optimize multiple objectives through the design of reinforcement learning strategies, rewards, and value functions. We also develop a double minimal-time distribution (DMTD) calculation method in the EITO implementation to achieve longer coasting distances and further reduce energy consumption. Compared with previous works, EITO enables continuous control of train operation without reference to offline speed profiles and optimizes several key performance indicators online. Finally, we conducted comparative tests of manual driving, existing intelligent driving algorithms (ITOR, STON), and the algorithms proposed in this paper (EITO) using real line data from the Yizhuang Line of Beijing Metro (YLBS). The test results show that the EITO algorithms outperform the current intelligent driving algorithms and manual driving in terms of energy consumption and passenger comfort. In addition, we further validated the robustness of EITO by testing it on complex sections of the YLBS with speed limits, gradients, and different running times. Overall, the EITOP algorithm has the best performance.
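The abstract describes a PPO-based approach that balances energy consumption, passenger comfort, and punctuality through its reward design. The following is a minimal, hypothetical sketch of what such a multi-objective step reward could look like; the specific terms, units, and weights are illustrative assumptions and are not taken from the paper's actual EITOP reward function.

```python
# Hypothetical multi-objective step reward for metro train operation.
# The weights and term definitions are illustrative assumptions, not the
# paper's actual EITOP reward design.
def step_reward(traction_power_kw, jerk_m_s3, time_error_s,
                w_energy=1.0, w_comfort=0.5, w_time=0.2, dt=1.0):
    energy_penalty = traction_power_kw * dt      # energy used during this control step
    comfort_penalty = abs(jerk_m_s3)             # comfort proxy: rate of change of acceleration
    punctuality_penalty = abs(time_error_s)      # deviation from the scheduled running time
    return -(w_energy * energy_penalty
             + w_comfort * comfort_penalty
             + w_time * punctuality_penalty)

# Example: moderate traction power, small jerk, two seconds ahead of schedule.
print(step_reward(traction_power_kw=120.0, jerk_m_s3=0.3, time_error_s=-2.0))
```

In a PPO setup, a reward of this form would be returned by the environment at each control step, and the relative weights would determine how the learned policy trades off energy saving against comfort and punctuality.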
Suggested Citation
Yunhu Huang & Wenzhu Lai & Dewang Chen & Geng Lin & Jiateng Yin, 2025.
"Enhanced intelligent train operation algorithms for metro train based on expert system and deep reinforcement learning,"
PLOS ONE, Public Library of Science, vol. 20(5), pages 1-27, May.
Handle:
RePEc:plo:pone00:0323478
DOI: 10.1371/journal.pone.0323478