Printed from https://ideas.repec.org/a/eee/transe/v196y2025ics1366554525000389.html

Enhancing feeder bus service coverage with Multi-Agent Reinforcement Learning: A case study in Hong Kong

Author

  • Su, Yang
  • Yang, Hai

Abstract

Public transport is a vital component of modern urban mobility, playing a significant role in reducing congestion and promoting environmental sustainability. Feeder bus services are essential for connecting residents to major public transport hubs, such as metro or rail stations. This paper proposes a novel framework that enhances feeder bus service coverage while maintaining network efficiency. The framework integrates Multi-Agent Reinforcement Learning (MARL) to simulate and optimize route designs and frequency settings. Additionally, we introduce a Cost-based Competitive Coverage (CCC) Model that evaluates the performance of feeder bus services by considering competition with other public transport modes. A case study conducted in two new towns in Hong Kong demonstrates the effectiveness and robustness of the proposed framework, highlighting its adaptability and potential to improve public transport accessibility.
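The abstract's core idea — agents jointly choosing routes and frequencies, scored on coverage gained against operating cost — can be illustrated with a toy sketch. Everything below (the stop set, reward weights, agent count, and the independent Q-learning scheme) is invented for illustration only; it is not the paper's CCC model or its MARL formulation.

```python
import random

random.seed(0)

N_STOPS = 6                     # candidate terminal stops (illustrative)
FREQS = [2, 4, 6]               # candidate buses per hour (illustrative)
ACTIONS = [(s, f) for s in range(N_STOPS) for f in FREQS]
N_AGENTS = 2                    # one agent per feeder route

def team_reward(joint):
    """Shared reward: coverage gain minus an operating-cost penalty."""
    coverage = len({stop for stop, _ in joint})   # distinct stops served
    cost = sum(freq for _, freq in joint)         # total service frequency
    return 10.0 * coverage - 0.5 * cost

# Independent Q-learners on a single-state (bandit) problem.
Q = [[0.0] * len(ACTIONS) for _ in range(N_AGENTS)]
alpha, eps = 0.1, 0.2

for _ in range(5000):
    picks = [random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=q.__getitem__)
             for q in Q]
    r = team_reward([ACTIONS[i] for i in picks])
    for q, a in zip(Q, picks):
        q[a] += alpha * (r - q[a])                # running-average update

best = [ACTIONS[max(range(len(ACTIONS)), key=q.__getitem__)] for q in Q]
print("learned joint plan:", best)
```

With this shared reward, serving the same stop twice is never a best response, so the agents learn to spread coverage across distinct stops; the paper's framework tackles the full route-design and frequency-setting problem, including competition from other transit modes.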

Suggested Citation

  • Su, Yang & Yang, Hai, 2025. "Enhancing feeder bus service coverage with Multi-Agent Reinforcement Learning: A case study in Hong Kong," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 196(C).
  • Handle: RePEc:eee:transe:v:196:y:2025:i:c:s1366554525000389
    DOI: 10.1016/j.tre.2025.103997

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S1366554525000389
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.tre.2025.103997?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wang, Yi & Qiu, Dawei & He, Yinglong & Zhou, Quan & Strbac, Goran, 2023. "Multi-agent reinforcement learning for electric vehicle decarbonized routing and scheduling," Energy, Elsevier, vol. 284(C).
    2. Ali Arishi & Krishna Krishnan, 2023. "A multi-agent deep reinforcement learning approach for solving the multi-depot vehicle routing problem," Journal of Management Analytics, Taylor & Francis Journals, vol. 10(3), pages 493-515, July.
    3. Christina Iliopoulou & Konstantinos Kepaptsoglou & Eleni Vlahogianni, 2019. "Metaheuristics for the transit route network design problem: a review and comparative analysis," Public Transport, Springer, vol. 11(3), pages 487-521, October.
    4. Javier Duran & Lorena Pradenas & Victor Parada, 2019. "Transit network design with pollution minimization," Public Transport, Springer, vol. 11(1), pages 189-210, June.
    5. Sunhyung Yoo & Jinwoo Brian Lee & Hoon Han, 2023. "A Reinforcement Learning approach for bus network design and frequency setting optimisation," Public Transport, Springer, vol. 15(2), pages 503-534, June.
    6. Roca-Riu, Mireia & Estrada, Miquel & Trapote, César, 2012. "The design of interurban bus networks in city centers," Transportation Research Part A: Policy and Practice, Elsevier, vol. 46(8), pages 1153-1165.
    7. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan et al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    8. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Wei Shen & Honglu Cao & Jiandong Zhao, 2025. "Modular Scheduling Optimization of Multi-Scenario Intelligent Connected Buses Under Reservation-Based Travel," Sustainability, MDPI, vol. 17(6), pages 1-25, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Philipp Heyken Soares, 2021. "Zone-based public transport route optimisation in an urban network," Public Transport, Springer, vol. 13(1), pages 197-231, March.
    2. Mahmoudi, Reza & Saidi, Saeid & Emrouznejad, Ali, 2025. "Mathematical programming in public bus transit design and operations: Emerging technologies and sustainability – A review," Socio-Economic Planning Sciences, Elsevier, vol. 98(C).
    3. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    4. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    5. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    6. János Kramár & Tom Eccles & Ian Gemp & Andrea Tacchetti & Kevin R. McKee & Mateusz Malinowski & Thore Graepel & Yoram Bachrach, 2022. "Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
    7. Boian Lazov, 2023. "A Deep Reinforcement Learning Trader without Offline Training," Papers 2303.00356, arXiv.org.
    8. Sunhyung Yoo & Jinwoo Brian Lee & Hoon Han, 2023. "A Reinforcement Learning approach for bus network design and frequency setting optimisation," Public Transport, Springer, vol. 15(2), pages 503-534, June.
    9. Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
    10. Christina Iliopoulou & Konstantinos Kepaptsoglou & Eleni Vlahogianni, 2019. "Metaheuristics for the transit route network design problem: a review and comparative analysis," Public Transport, Springer, vol. 11(3), pages 487-521, October.
    11. Cui, Tianxiang & Du, Nanjiang & Yang, Xiaoying & Ding, Shusheng, 2024. "Multi-period portfolio optimization using a deep reinforcement learning hyper-heuristic approach," Technological Forecasting and Social Change, Elsevier, vol. 198(C).
    12. Weichao Mao & Tamer Başar, 2023. "Provably Efficient Reinforcement Learning in Decentralized General-Sum Markov Games," Dynamic Games and Applications, Springer, vol. 13(1), pages 165-186, March.
    13. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
    14. Liu, Bokai & Wang, Yizheng & Rabczuk, Timon & Olofsson, Thomas & Lu, Weizhuo, 2024. "Multi-scale modeling in thermal conductivity of Polyurethane incorporated with Phase Change Materials using Physics-Informed Neural Networks," Renewable Energy, Elsevier, vol. 220(C).
    15. Zhang, Qin & Liu, Yu & Xiang, Yisha & Xiahou, Tangfan, 2024. "Reinforcement learning in reliability and maintenance optimization: A tutorial," Reliability Engineering and System Safety, Elsevier, vol. 251(C).
    16. Cervantes-Sanmiguel, K.I. & Chavez-Hernandez, M.V. & Ibarra-Rojas, O.J., 2023. "Analyzing the trade-off between minimizing travel times and reducing monetary costs for users in the transit network design," Transportation Research Part B: Methodological, Elsevier, vol. 173(C), pages 142-161.
    17. Wang, Xin & Liu, Shuo & Yu, Yifan & Yue, Shengzhi & Liu, Ying & Zhang, Fumin & Lin, Yuanshan, 2023. "Modeling collective motion for fish schooling via multi-agent reinforcement learning," Ecological Modelling, Elsevier, vol. 477(C).
    18. Li, Shiyao & Zhou, Yue & Wu, Jianzhong & Pan, Yiqun & Huang, Zhizhong & Zhou, Nan, 2025. "A digital twin of multiple energy hub systems with peer-to-peer energy sharing," Applied Energy, Elsevier, vol. 380(C).
    19. Tian Zhu & Merry H. Ma, 2022. "Deriving the Optimal Strategy for the Two Dice Pig Game via Reinforcement Learning," Stats, MDPI, vol. 5(3), pages 1-14, August.
    20. Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:transe:v:196:y:2025:i:c:s1366554525000389. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600244/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.