Printed from https://ideas.repec.org/a/eee/ejores/v310y2023i3p1179-1191.html

Navigational guidance – A deep learning approach

Author

Listed:
  • Yen, Benjamin P.-C.
  • Luo, Yu

Abstract

This paper addresses the navigation problem faced by many companies, such as logistics firms, couriers, and ride-hailing services like Uber: helping users find the best route through multiple destinations in the shortest amount of time. Navigation problems involving multiple destinations are formulated in this paper as Directed Steiner Tree (DST) problems on directed graphs. We propose an end-to-end deep learning approach that tackles DST problems in a supervised and non-autoregressive manner. The core of our approach is a Graph Neural Network (GNN) that estimates whether each node belongs to the optimal solution. Experiments are conducted to evaluate the proposed approach, and the results suggest that it can effectively solve DST problems with at least 95.04% accuracy. Compared to traditional methods, our approach significantly improves the solvability of DST problems with acceptable execution time. We further explore how our approach can be applied to different scenarios, such as large-scale graphs. Moreover, we show that it extends smoothly to several variants of the Steiner Tree problem, including Steiner Forest problems. In summary, the proposed approach shows promising results and can be implemented in practice. Research limitations and future directions are also discussed.
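To make the DST formulation concrete, the sketch below implements the classic greedy shortest-path heuristic for the Directed Steiner Tree problem (in the spirit of the approximation literature the article cites), not the paper's GNN approach. The graph, edge weights, and function name are illustrative; it repeatedly connects the nearest uncovered terminal to the current tree via a shortest directed path.

```python
import heapq

def dst_shortest_path_heuristic(edges, root, terminals):
    """Greedy shortest-path heuristic for the Directed Steiner Tree problem.

    Illustrative sketch only (not the paper's GNN method): grow a tree
    from `root`, repeatedly attaching the cheapest-to-reach uncovered
    terminal along a shortest directed path.
    """
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))

    tree_nodes = {root}
    tree_edges = set()
    remaining = set(terminals) - tree_nodes
    total = 0

    while remaining:
        # Multi-source Dijkstra from every node already in the tree.
        dist = {n: 0 for n in tree_nodes}
        pred = {}
        pq = [(0, n) for n in tree_nodes]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    pred[v] = u
                    heapq.heappush(pq, (nd, v))

        # Pick the cheapest reachable uncovered terminal.
        t = min(remaining, key=lambda x: dist.get(x, float("inf")))
        if t not in dist:
            raise ValueError(f"terminal {t!r} unreachable from tree")

        # Walk the shortest path back into the tree, adding new edges.
        node = t
        while node not in tree_nodes:
            p = pred[node]
            w = next(w for v, w in adj[p] if v == node)
            tree_edges.add((p, node))
            total += w
            tree_nodes.add(node)
            node = p
        remaining -= tree_nodes

    return total, tree_edges

# Tiny illustrative instance: root "r", terminals "t1" and "t2".
edges = [("r", "a", 1), ("a", "t1", 1), ("a", "t2", 1),
         ("r", "t1", 5), ("r", "t2", 5)]
cost, tree = dst_shortest_path_heuristic(edges, "r", {"t1", "t2"})
# Sharing the intermediate node "a" yields total cost 3,
# beating the direct edges (cost 10).
```

The heuristic illustrates why DST is harder than repeated shortest paths: the cheapest tree shares intermediate nodes across terminals, which is exactly the structure the paper's GNN is trained to recognize.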

Suggested Citation

  • Yen, Benjamin P.-C. & Luo, Yu, 2023. "Navigational guidance – A deep learning approach," European Journal of Operational Research, Elsevier, vol. 310(3), pages 1179-1191.
  • Handle: RePEc:eee:ejores:v:310:y:2023:i:3:p:1179-1191
    DOI: 10.1016/j.ejor.2023.04.020

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221723003041
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2023.04.020?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Gah-Yi Ban & Cynthia Rudin, 2019. "The Big Data Newsvendor: Practical Insights from Machine Learning," Operations Research, INFORMS, vol. 67(1), pages 90-108, January.
    2. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    3. Kallestad, Jakob & Hasibi, Ramin & Hemmati, Ahmad & Sörensen, Kenneth, 2023. "A general deep reinforcement learning hyperheuristic framework for solving combinatorial optimization problems," European Journal of Operational Research, Elsevier, vol. 309(1), pages 446-468.
    4. Kraus, Mathias & Feuerriegel, Stefan & Oztekin, Asil, 2020. "Deep learning in business analytics and operations research: Models, applications and managerial implications," European Journal of Operational Research, Elsevier, vol. 281(3), pages 628-641.
    5. Sigrist, Fabio & Leuenberger, Nicola, 2023. "Machine learning for corporate default risk: Multi-period prediction, frailty correlation, loan portfolios, and tail probabilities," European Journal of Operational Research, Elsevier, vol. 305(3), pages 1390-1406.
    6. McHale, Ian G. & Holmes, Benjamin, 2023. "Estimating transfer fees of professional footballers using advanced performance metrics and machine learning," European Journal of Operational Research, Elsevier, vol. 306(1), pages 389-399.
    7. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    8. Dimitri Watel & Marc-Antoine Weisser, 2016. "A practical greedy approximation for the directed Steiner tree problem," Journal of Combinatorial Optimization, Springer, vol. 32(4), pages 1327-1370, November.
    9. Philipp Borchert & Kristof Coussement & Arno de Caigny & Jochen de Weerdt, 2023. "Extending business failure prediction models with textual website content using deep learning," Post-Print hal-03976762, HAL.
    10. Meng Qi & Yuanyuan Shi & Yongzhi Qi & Chenxin Ma & Rong Yuan & Di Wu & Zuo-Jun (Max) Shen, 2023. "A Practical End-to-End Inventory Management Model with Deep Learning," Management Science, INFORMS, vol. 69(2), pages 759-773, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Erkip, Nesim Kohen, 2023. "Can accessing much data reshape the theory? Inventory theory under the challenge of data-driven systems," European Journal of Operational Research, Elsevier, vol. 308(3), pages 949-959.
    2. Vairetti, Carla & Aránguiz, Ignacio & Maldonado, Sebastián & Karmy, Juan Pablo & Leal, Alonso, 2024. "Analytics-driven complaint prioritisation via deep learning and multicriteria decision-making," European Journal of Operational Research, Elsevier, vol. 312(3), pages 1108-1118.
    3. Koen W. de Bock & Kristof Coussement & Arno De Caigny & Roman Slowiński & Bart Baesens & Robert N Boute & Tsan-Ming Choi & Dursun Delen & Mathias Kraus & Stefan Lessmann & Sebastián Maldonado & David , 2023. "Explainable AI for Operational Research: A Defining Framework, Methods, Applications, and a Research Agenda," Post-Print hal-04219546, HAL.
    4. Pournader, Mehrdokht & Ghaderi, Hadi & Hassanzadegan, Amir & Fahimnia, Behnam, 2021. "Artificial intelligence applications in supply chain management," International Journal of Production Economics, Elsevier, vol. 241(C).
    5. Yi Wang & Yafei Yang & Zhaoxiang Qin & Yefei Yang & Jun Li, 2023. "A Literature Review on the Application of Digital Technology in Achieving Green Supply Chain Management," Sustainability, MDPI, vol. 15(11), pages 1-18, May.
    6. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    7. Xi Chen & Zachary Owen & Clark Pixton & David Simchi-Levi, 2022. "A Statistical Learning Approach to Personalization in Revenue Management," Management Science, INFORMS, vol. 68(3), pages 1923-1937, March.
    8. Stefan Feuerriegel & Mateusz Dolata & Gerhard Schwabe, 2020. "Fair AI," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 62(4), pages 379-384, August.
    9. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    10. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    11. Gahm, Christian & Uzunoglu, Aykut & Wahl, Stefan & Ganschinietz, Chantal & Tuma, Axel, 2022. "Applying machine learning for the anticipation of complex nesting solutions in hierarchical production planning," European Journal of Operational Research, Elsevier, vol. 296(3), pages 819-836.
    12. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    13. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    14. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    15. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    16. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    17. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    18. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    19. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    20. Zichen Lu & Ying Yan, 2024. "Temperature Control of Fuel Cell Based on PEI-DDPG," Energies, MDPI, vol. 17(7), pages 1-19, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ejores:v:310:y:2023:i:3:p:1179-1191. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.