
Multi-AGV Dynamic Scheduling in an Automated Container Terminal: A Deep Reinforcement Learning Approach

Author

Listed:
  • Xiyan Zheng

    (Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai 201306, China)

  • Chengji Liang

    (Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai 201306, China)

  • Yu Wang

    (Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai 201306, China)

  • Jian Shi

    (Department of Engineering Technology, University of Houston, Houston, TX 77004, USA)

  • Gino Lim

    (Department of Industrial Engineering, University of Houston, Houston, TX 77004, USA)

Abstract

With the rapid development of global trade, ports and terminals are playing an increasingly important role, and automated guided vehicles (AGVs) have been used as the main carriers performing the loading/unloading operations in automated container terminals. In this paper, we investigate a multi-AGV dynamic scheduling problem to improve terminal operational efficiency, considering the complexity and uncertainty involved in port terminal operations. We propose to model the dynamic scheduling of AGVs as a Markov decision process (MDP) with mixed decision rules. Then, we develop a novel adaptive learning algorithm based on a deep Q-network (DQN) to generate the optimal policy. The proposed algorithm is trained on data obtained from interactions with a simulation environment that reflects the real-world operation of an automated container terminal in Shanghai, China. The simulation studies show that, compared with conventional scheduling methods, i.e., a heuristic genetic algorithm (GA) and rule-based scheduling, the proposed approach performs better in terms of effectiveness and efficiency.
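As a rough illustration of the approach the abstract describes, the sketch below shows a generic deep Q-network (DQN) training loop for dispatching an idle AGV to one of several pending container jobs. It is written in Python with PyTorch; the state features, network architecture, reward shape, hyperparameters, and the stand-in environment are illustrative assumptions, not the authors' implementation, and the paper's mixed decision rules and terminal simulator are not reproduced here.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Illustrative problem size (assumed, not from the paper): the state is a flat
# feature vector, e.g. AGV positions and waiting times of pending container
# jobs, and an action assigns the idle AGV to one of N_TASKS pending jobs.
STATE_DIM, N_TASKS = 12, 4


class QNetwork(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per task."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def train_dqn(env_reset, env_step, episodes=50, gamma=0.99, lr=1e-3,
              eps_start=1.0, eps_end=0.05, eps_decay=0.995,
              batch_size=64, buffer_size=10_000, target_sync=100):
    """Generic DQN loop. env_reset() -> state; env_step(state, action) ->
    (next_state, reward, done). Both callables stand in for a terminal simulator."""
    policy_net = QNetwork(STATE_DIM, N_TASKS)
    target_net = QNetwork(STATE_DIM, N_TASKS)
    target_net.load_state_dict(policy_net.state_dict())
    optimizer = optim.Adam(policy_net.parameters(), lr=lr)
    buffer = deque(maxlen=buffer_size)
    eps, steps = eps_start, 0

    for _ in range(episodes):
        state, done = env_reset(), False
        while not done:
            # Epsilon-greedy choice over the pending tasks.
            if random.random() < eps:
                action = random.randrange(N_TASKS)
            else:
                with torch.no_grad():
                    q = policy_net(torch.tensor(state).float().unsqueeze(0))
                action = int(q.argmax(dim=1).item())

            next_state, reward, done = env_step(state, action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state
            steps += 1

            if len(buffer) >= batch_size:
                # Sample a minibatch of transitions from the replay buffer.
                batch = random.sample(list(buffer), batch_size)
                s, a, r, s2, d = map(list, zip(*batch))
                s, s2 = torch.tensor(s).float(), torch.tensor(s2).float()
                a = torch.tensor(a).long().unsqueeze(1)
                r, d = torch.tensor(r).float(), torch.tensor(d).float()

                # Standard DQN target: r + gamma * max_a' Q_target(s', a').
                q_sa = policy_net(s).gather(1, a).squeeze(1)
                with torch.no_grad():
                    q_next = target_net(s2).max(dim=1).values
                loss = nn.functional.smooth_l1_loss(q_sa, r + gamma * (1.0 - d) * q_next)

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            # Periodically copy the online weights into the target network.
            if steps % target_sync == 0:
                target_net.load_state_dict(policy_net.state_dict())

        eps = max(eps_end, eps * eps_decay)

    return policy_net


if __name__ == "__main__":
    # Random stand-in environment so the sketch runs end to end; a real study
    # would back these callables with a terminal simulation instead.
    def env_reset():
        return [random.random() for _ in range(STATE_DIM)]

    def env_step(state, action):
        next_state = [random.random() for _ in range(STATE_DIM)]
        reward = -random.random()        # e.g. negative incremental delay (assumed)
        done = random.random() < 0.05    # episodes end stochastically in the toy env
        return next_state, reward, done

    train_dqn(env_reset, env_step, episodes=5)

In the paper's setting, env_reset and env_step would be backed by the simulation of the Shanghai terminal operation mentioned in the abstract, and the learned dispatching policy would be benchmarked against the GA and rule-based baselines.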

Suggested Citation

  • Xiyan Zheng & Chengji Liang & Yu Wang & Jian Shi & Gino Lim, 2022. "Multi-AGV Dynamic Scheduling in an Automated Container Terminal: A Deep Reinforcement Learning Approach," Mathematics, MDPI, vol. 10(23), pages 1-19, December.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:23:p:4575-:d:992042

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/23/4575/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/23/4575/
    Download Restriction: no

    References listed on IDEAS

    1. Angeloudis, Panagiotis & Bell, Michael G.H., 2010. "An uncertainty-aware AGV assignment algorithm for automated container terminals," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 46(3), pages 354-366, May.
    2. Fotuhi, Fateme & Huynh, Nathan & Vidal, Jose M. & Xie, Yuanchang, 2013. "Modeling yard crane operators as reinforcement learning agents," Research in Transportation Economics, Elsevier, vol. 42(1), pages 3-12.
    3. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    4. Han, Xuefeng & He, Hongwen & Wu, Jingda & Peng, Jiankun & Li, Yuecheng, 2019. "Energy management based on reinforcement learning with double deep Q-learning for a hybrid electric tracked vehicle," Applied Energy, Elsevier, vol. 254(C).
    5. Yong Wu & Wenkai Li & Matthew E. H. Petering & Mark Goh & Robert de Souza, 2015. "Scheduling Multiple Yard Cranes with Crane Interference and Safety Distance Requirement," Transportation Science, INFORMS, vol. 49(4), pages 990-1005, November.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    2. Wang, Xuan & Wang, Rui & Jin, Ming & Shu, Gequn & Tian, Hua & Pan, Jiaying, 2020. "Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    3. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    4. Kunyu Wang & Rong Yang & Yongjian Zhou & Wei Huang & Song Zhang, 2022. "Design and Improvement of SD3-Based Energy Management Strategy for a Hybrid Electric Urban Bus," Energies, MDPI, vol. 15(16), pages 1-21, August.
    5. Xu, Bin & Rathod, Dhruvang & Zhang, Darui & Yebi, Adamu & Zhang, Xueyu & Li, Xiaoya & Filipi, Zoran, 2020. "Parametric study on reinforcement learning optimized energy management strategy for a hybrid electric vehicle," Applied Energy, Elsevier, vol. 259(C).
    6. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    7. Yan, Yimo & Chow, Andy H.F. & Ho, Chin Pang & Kuo, Yong-Hong & Wu, Qihao & Ying, Chengshuo, 2022. "Reinforcement learning for logistics and supply chain management: Methodologies, state of the art, and future opportunities," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 162(C).
    8. Rachid Oucheikh & Tuwe Löfström & Ernst Ahlberg & Lars Carlsson, 2021. "Rolling Cargo Management Using a Deep Reinforcement Learning Approach," Logistics, MDPI, vol. 5(1), pages 1-18, February.
    9. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    10. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    11. Yin, Linfei & Zhang, Bin, 2021. "Time series generative adversarial network controller for long-term smart generation control of microgrids," Applied Energy, Elsevier, vol. 281(C).
    12. Matteo Acquarone & Claudio Maino & Daniela Misul & Ezio Spessa & Antonio Mastropietro & Luca Sorrentino & Enrico Busto, 2023. "Influence of the Reward Function on the Selection of Reinforcement Learning Agents for Hybrid Electric Vehicles Real-Time Control," Energies, MDPI, vol. 16(6), pages 1-22, March.
    13. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    14. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    15. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    16. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    17. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    18. Zhang, Di & Chen, Feng & Mei, Ziqiao, 2023. "Optimization on joint scheduling of yard allocation and transfer manpower assignment for automobile RO-RO terminal," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 177(C).
    19. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    20. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:10:y:2022:i:23:p:4575-:d:992042. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.