
Rolling Cargo Management Using a Deep Reinforcement Learning Approach

Authors
  • Rachid Oucheikh

    (Department of Computer Science, Jönköping University, 553 18 Jönköping, Sweden)

  • Tuwe Löfström

    (Department of Computer Science, Jönköping University, 553 18 Jönköping, Sweden)

  • Ernst Ahlberg

    (Department of Pharmaceutical Biosciences, Uppsala University, 752 36 Uppsala, Sweden
    Stena Line, 413 27 Göteborg, Sweden)

  • Lars Carlsson

    (Stena Line, 413 27 Göteborg, Sweden
    Centre for Reliable Machine Learning, University of London, London WC1E 7HU, UK)

Abstract

Loading and unloading rolling cargo on roll-on/roll-off (RoRo) vessels are frequent and important operations in maritime logistics. In this paper, we apply state-of-the-art deep reinforcement learning algorithms to automate these operations in a complex, realistic environment. The objective is to teach an autonomous tug master to manage rolling cargo and perform loading and unloading operations while avoiding collisions with static and dynamic obstacles along the way. The artificial intelligence agent representing the tug master is trained and evaluated in a challenging environment built on Unity's ML-Agents learning framework, using proximal policy optimization (PPO). The agent is equipped with sensors for obstacle detection and receives real-time feedback from the environment through its reward function, allowing it to dynamically adapt its policy and navigation strategy. The performance evaluation shows that, with appropriate hyperparameters, the agent can successfully learn all required operations, including lane following, obstacle avoidance, and rolling cargo placement. This study also demonstrates the potential of intelligent autonomous systems to improve the performance and service quality of maritime transport.
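
A minimal sketch may help make the setup the abstract describes concrete. The snippet below is an illustration, not the authors' code: it uses the Unity ML-Agents Python API (mlagents_envs) to connect to a Unity build and step an agent through episodes. The build name "RoRoDeck" is hypothetical, and the random placeholder policy stands in for the PPO policy that, in the paper's setup, would select actions and be optimized against the environment's reward signal.

```python
# Minimal sketch (not the authors' code) of an agent-environment loop
# using the Unity ML-Agents Python API (mlagents_envs).
# Assumptions: a Unity build named "RoRoDeck" exposing one behavior;
# actions are random placeholders where a PPO policy would act.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="RoRoDeck")  # hypothetical build name
env.reset()

# Each agent type in the scene is exposed as a named "behavior".
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for episode in range(10):
    env.reset()
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    episode_reward = 0.0
    while len(terminal_steps) == 0:
        # Placeholder: sample random actions for all agents awaiting a
        # decision. A trained PPO policy would be queried here instead.
        action = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, action)
        env.step()
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        episode_reward += decision_steps.reward.sum()
    episode_reward += terminal_steps.reward.sum()
    print(f"episode {episode}: reward {episode_reward:.2f}")

env.close()
```

In practice, ML-Agents trains PPO through its mlagents-learn command-line trainer, with hyperparameters such as batch size, learning rate, clipping epsilon, and discount factor specified in a YAML configuration; the abstract's point that hyperparameter choice determines whether the agent masters lane following, obstacle avoidance, and cargo placement applies at that layer.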

Suggested Citation

  • Rachid Oucheikh & Tuwe Löfström & Ernst Ahlberg & Lars Carlsson, 2021. "Rolling Cargo Management Using a Deep Reinforcement Learning Approach," Logistics, MDPI, vol. 5(1), pages 1-18, February.
  • Handle: RePEc:gam:jlogis:v:5:y:2021:i:1:p:10-:d:495904

    Download full text from publisher

    File URL: https://www.mdpi.com/2305-6290/5/1/10/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2305-6290/5/1/10/
    Download Restriction: no

    References listed on IDEAS

    1. Chaemin Lee & Mun Keong Lee & Jae Young Shin, 2020. "Lashing Force Prediction Model with Multimodal Deep Learning and AutoML for Stowage Planning Automation in Containerships," Logistics, MDPI, vol. 5(1), pages 1-15, December.
    2. Alberto Camarero Orive & José Ignacio Parra Santiago & María Magdalena Esteban-Infantes Corral & Nicoletta González-Cancelas, 2020. "Strategic Analysis of the Automation of Container Port Terminals through BOT (Business Observation Tool)," Logistics, MDPI, vol. 4(1), pages 1-14, February.
    3. Fotuhi, Fateme & Huynh, Nathan & Vidal, Jose M. & Xie, Yuanchang, 2013. "Modeling yard crane operators as reinforcement learning agents," Research in Transportation Economics, Elsevier, vol. 42(1), pages 3-12.
    4. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    5. Ziaul Haque Munim & Mariia Dushenko & Veronica Jaramillo Jimenez & Mohammad Hassan Shakil & Marius Imset, 2020. "Big data and artificial intelligence in the maritime industry: a bibliometric review and future research directions," Maritime Policy & Management, Taylor & Francis Journals, vol. 47(5), pages 577-597, July.
    6. Zhang, Canrong & Guan, Hao & Yuan, Yifei & Chen, Weiwei & Wu, Tao, 2020. "Machine learning-driven algorithms for the container relocation problem," Transportation Research Part B: Methodological, Elsevier, vol. 139(C), pages 102-131.
    7. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & et al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Mehran Farzadmehr & Valentin Carlan & Thierry Vanelslander, 2023. "Contemporary challenges and AI solutions in port operations: applying Gale–Shapley algorithm to find best matches," Journal of Shipping and Trade, Springer, vol. 8(1), pages 1-44, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Filom, Siyavash & Amiri, Amir M. & Razavi, Saiedeh, 2022. "Applications of machine learning methods in port operations – A systematic literature review," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 161(C).
    2. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    3. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    4. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    5. Yang, Kaiyuan & Huang, Houjing & Vandans, Olafs & Murali, Adithya & Tian, Fujia & Yap, Roland H.C. & Dai, Liang, 2023. "Applying deep reinforcement learning to the HP model for protein structure prediction," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 609(C).
    6. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    7. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    8. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    9. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    10. O’Malley, Cormac & de Mars, Patrick & Badesa, Luis & Strbac, Goran, 2023. "Reinforcement learning and mixed-integer programming for power plant scheduling in low carbon systems: Comparison and hybridisation," Applied Energy, Elsevier, vol. 349(C).
    11. Yuchao Dong, 2022. "Randomized Optimal Stopping Problem in Continuous time and Reinforcement Learning Algorithm," Papers 2208.02409, arXiv.org, revised Sep 2023.
    12. Shijun Wang & Baocheng Zhu & Chen Li & Mingzhe Wu & James Zhang & Wei Chu & Yuan Qi, 2020. "Riemannian Proximal Policy Optimization," Computer and Information Science, Canadian Center of Science and Education, vol. 13(3), pages 1-93, August.
    13. Xuan-Kun Li & Jian-Xu Ma & Xiang-Yu Li & Jun-Jie Hu & Chuan-Yang Ding & Feng-Kai Han & Xiao-Min Guo & Xi Tan & Xian-Min Jin, 2024. "High-efficiency reinforcement learning with hybrid architecture photonic integrated circuit," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    14. Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
    15. Shohei Ohsawa, 2021. "Truthful Self-Play," Papers 2106.03007, arXiv.org, revised Feb 2023.
    16. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    17. Ayman Chaouki & Stephen Hardiman & Christian Schmidt & Emmanuel S'eri'e & Joachim de Lataillade, 2020. "Deep Deterministic Portfolio Optimization," Papers 2003.06497, arXiv.org, revised Apr 2020.
    18. Se-Hoon Jung & Jun-Ho Huh, 2019. "A Novel on Transmission Line Tower Big Data Analysis Model Using Altered K-means and ADQL," Sustainability, MDPI, vol. 11(13), pages 1-25, June.
    19. Bálint Kővári & Lászlo Szőke & Tamás Bécsi & Szilárd Aradi & Péter Gáspár, 2021. "Traffic Signal Control via Reinforcement Learning for Reducing Global Vehicle Emission," Sustainability, MDPI, vol. 13(20), pages 1-18, October.
    20. Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jlogis:v:5:y:2021:i:1:p:10-:d:495904. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.