Enhancing feeder bus service coverage with Multi-Agent Reinforcement Learning: A case study in Hong Kong
DOI: 10.1016/j.tre.2025.103997
References listed on IDEAS
- Wang, Yi & Qiu, Dawei & He, Yinglong & Zhou, Quan & Strbac, Goran, 2023. "Multi-agent reinforcement learning for electric vehicle decarbonized routing and scheduling," Energy, Elsevier, vol. 284(C).
- Ali Arishi & Krishna Krishnan, 2023. "A multi-agent deep reinforcement learning approach for solving the multi-depot vehicle routing problem," Journal of Management Analytics, Taylor & Francis Journals, vol. 10(3), pages 493-515, July.
- Christina Iliopoulou & Konstantinos Kepaptsoglou & Eleni Vlahogianni, 2019. "Metaheuristics for the transit route network design problem: a review and comparative analysis," Public Transport, Springer, vol. 11(3), pages 487-521, October.
- Javier Duran & Lorena Pradenas & Victor Parada, 2019. "Transit network design with pollution minimization," Public Transport, Springer, vol. 11(1), pages 189-210, June.
- Sunhyung Yoo & Jinwoo Brian Lee & Hoon Han, 2023. "A Reinforcement Learning approach for bus network design and frequency setting optimisation," Public Transport, Springer, vol. 15(2), pages 503-534, June.
- Roca-Riu, Mireia & Estrada, Miquel & Trapote, César, 2012. "The design of interurban bus networks in city centers," Transportation Research Part A: Policy and Practice, Elsevier, vol. 46(8), pages 1153-1165.
- Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & M, 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
- David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Wei Shen & Honglu Cao & Jiandong Zhao, 2025. "Modular Scheduling Optimization of Multi-Scenario Intelligent Connected Buses Under Reservation-Based Travel," Sustainability, MDPI, vol. 17(6), pages 1-25, March.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Philipp Heyken Soares, 2021. "Zone-based public transport route optimisation in an urban network," Public Transport, Springer, vol. 13(1), pages 197-231, March.
- Mahmoudi, Reza & Saidi, Saeid & Emrouznejad, Ali, 2025. "Mathematical programming in public bus transit design and operations: Emerging technologies and sustainability – A review," Socio-Economic Planning Sciences, Elsevier, vol. 98(C).
- Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
- Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
- Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
- János Kramár & Tom Eccles & Ian Gemp & Andrea Tacchetti & Kevin R. McKee & Mateusz Malinowski & Thore Graepel & Yoram Bachrach, 2022. "Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
- Boian Lazov, 2023. "A Deep Reinforcement Learning Trader without Offline Training," Papers 2303.00356, arXiv.org.
- Sunhyung Yoo & Jinwoo Brian Lee & Hoon Han, 2023. "A Reinforcement Learning approach for bus network design and frequency setting optimisation," Public Transport, Springer, vol. 15(2), pages 503-534, June.
- Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
- Christina Iliopoulou & Konstantinos Kepaptsoglou & Eleni Vlahogianni, 2019. "Metaheuristics for the transit route network design problem: a review and comparative analysis," Public Transport, Springer, vol. 11(3), pages 487-521, October.
- Cui, Tianxiang & Du, Nanjiang & Yang, Xiaoying & Ding, Shusheng, 2024. "Multi-period portfolio optimization using a deep reinforcement learning hyper-heuristic approach," Technological Forecasting and Social Change, Elsevier, vol. 198(C).
- Weichao Mao & Tamer Başar, 2023. "Provably Efficient Reinforcement Learning in Decentralized General-Sum Markov Games," Dynamic Games and Applications, Springer, vol. 13(1), pages 165-186, March.
- Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
- Liu, Bokai & Wang, Yizheng & Rabczuk, Timon & Olofsson, Thomas & Lu, Weizhuo, 2024. "Multi-scale modeling in thermal conductivity of Polyurethane incorporated with Phase Change Materials using Physics-Informed Neural Networks," Renewable Energy, Elsevier, vol. 220(C).
- Zhang, Qin & Liu, Yu & Xiang, Yisha & Xiahou, Tangfan, 2024. "Reinforcement learning in reliability and maintenance optimization: A tutorial," Reliability Engineering and System Safety, Elsevier, vol. 251(C).
- Cervantes-Sanmiguel, K.I. & Chavez-Hernandez, M.V. & Ibarra-Rojas, O.J., 2023. "Analyzing the trade-off between minimizing travel times and reducing monetary costs for users in the transit network design," Transportation Research Part B: Methodological, Elsevier, vol. 173(C), pages 142-161.
- Wang, Xin & Liu, Shuo & Yu, Yifan & Yue, Shengzhi & Liu, Ying & Zhang, Fumin & Lin, Yuanshan, 2023. "Modeling collective motion for fish schooling via multi-agent reinforcement learning," Ecological Modelling, Elsevier, vol. 477(C).
- Li, Shiyao & Zhou, Yue & Wu, Jianzhong & Pan, Yiqun & Huang, Zhizhong & Zhou, Nan, 2025. "A digital twin of multiple energy hub systems with peer-to-peer energy sharing," Applied Energy, Elsevier, vol. 380(C).
- Tian Zhu & Merry H. Ma, 2022. "Deriving the Optimal Strategy for the Two Dice Pig Game via Reinforcement Learning," Stats, MDPI, vol. 5(3), pages 1-14, August.
- Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.
More about this item
Keywords
Transit Route Network Design Problem; Multi-Agent Reinforcement Learning
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:transe:v:196:y:2025:i:c:s1366554525000389. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/600244/description#description.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.