Printed from https://ideas.repec.org/a/eee/ejores/v310y2023i3p1179-1191.html

Navigational guidance – A deep learning approach

Authors

  • Yen, Benjamin P.-C.
  • Luo, Yu

Abstract

This paper addresses a navigation problem facing many companies, such as logistics providers, couriers, and ride-hailing services like Uber: helping users find the best route to multiple destinations in the shortest time. We formulate multi-destination navigation as the Directed Steiner Tree (DST) problem on directed graphs and propose an end-to-end deep learning approach that solves DST instances in a supervised, non-autoregressive manner. The core of the approach is a Graph Neural Network (GNN) that estimates whether each node belongs to the optimal solution. Experiments show that the proposed approach solves DST problems with at least 95.04% accuracy. Compared with traditional methods, it significantly improves the solvability of DST problems at acceptable execution times. We further explore how the approach extends to other settings, such as large-scale graphs, and show that it applies smoothly to several variants of the Steiner Tree problem, including the Steiner Forest problem. In summary, the proposed approach shows promising results and can be implemented in practice. Research limitations and future directions are also discussed.
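To make the DST formulation concrete, the sketch below builds a toy directed-graph instance and computes a simple baseline tree as the union of root-to-terminal shortest paths. This is not the paper's GNN method, only a classical reference construction; the graph, node names, and edge weights are invented for illustration.

```python
import heapq

def dijkstra(adj, src):
    # Standard Dijkstra over a weighted directed graph given as
    # {u: [(v, w), ...]}; returns distance and parent maps from src.
    dist = {src: 0}
    parent = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def shortest_path_union(adj, root, terminals):
    # Baseline DST heuristic: take the union of shortest paths from the
    # root to each terminal; shared edges are counted only once.
    _, parent = dijkstra(adj, root)
    edges = set()
    for t in terminals:
        node = t
        while node != root:
            p = parent[node]
            edges.add((p, node))
            node = p
    weight = {(u, v): w for u in adj for v, w in adj[u]}
    return edges, sum(weight[e] for e in edges)

# Toy instance (invented): root "r" must reach terminals "a" and "b".
adj = {
    "r": [("x", 1), ("a", 5)],
    "x": [("a", 1), ("b", 1)],
    "a": [],
    "b": [],
}
edges, cost = shortest_path_union(adj, "r", ["a", "b"])
print(cost)  # shared edge r->x counted once: 1 + 1 + 1 = 3
```

The paper's approach would instead score each node's membership in the optimal tree with a GNN and extract the tree from those scores; this heuristic only illustrates the problem the network is trained to solve.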

Suggested Citation

  • Yen, Benjamin P.-C. & Luo, Yu, 2023. "Navigational guidance – A deep learning approach," European Journal of Operational Research, Elsevier, vol. 310(3), pages 1179-1191.
  • Handle: RePEc:eee:ejores:v:310:y:2023:i:3:p:1179-1191
    DOI: 10.1016/j.ejor.2023.04.020

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221723003041
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2023.04.020?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Erkip, Nesim Kohen, 2023. "Can accessing much data reshape the theory? Inventory theory under the challenge of data-driven systems," European Journal of Operational Research, Elsevier, vol. 308(3), pages 949-959.
    2. Koen W. de Bock & Kristof Coussement & Arno De Caigny & Roman Slowiński & Bart Baesens & Robert N Boute & Tsan-Ming Choi & Dursun Delen & Mathias Kraus & Stefan Lessmann & Sebastián Maldonado & David , 2023. "Explainable AI for Operational Research: A Defining Framework, Methods, Applications, and a Research Agenda," Post-Print hal-04219546, HAL.
    3. Pournader, Mehrdokht & Ghaderi, Hadi & Hassanzadegan, Amir & Fahimnia, Behnam, 2021. "Artificial intelligence applications in supply chain management," International Journal of Production Economics, Elsevier, vol. 241(C).
    4. Vairetti, Carla & Aránguiz, Ignacio & Maldonado, Sebastián & Karmy, Juan Pablo & Leal, Alonso, 2024. "Analytics-driven complaint prioritisation via deep learning and multicriteria decision-making," European Journal of Operational Research, Elsevier, vol. 312(3), pages 1108-1118.
    5. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    6. Xi Chen & Zachary Owen & Clark Pixton & David Simchi-Levi, 2022. "A Statistical Learning Approach to Personalization in Revenue Management," Management Science, INFORMS, vol. 68(3), pages 1923-1937, March.
    7. Stefan Feuerriegel & Mateusz Dolata & Gerhard Schwabe, 2020. "Fair AI," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 62(4), pages 379-384, August.
    8. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    9. Gahm, Christian & Uzunoglu, Aykut & Wahl, Stefan & Ganschinietz, Chantal & Tuma, Axel, 2022. "Applying machine learning for the anticipation of complex nesting solutions in hierarchical production planning," European Journal of Operational Research, Elsevier, vol. 296(3), pages 819-836.
    10. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    11. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    12. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    13. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    14. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    15. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    16. Meng Qi & Ying Cao & Zuo-Jun (Max) Shen, 2022. "Distributionally Robust Conditional Quantile Prediction with Fixed Design," Management Science, INFORMS, vol. 68(3), pages 1639-1658, March.
    17. Kaffash, Sepideh & Nguyen, An Truong & Zhu, Joe, 2021. "Big data algorithms and applications in intelligent transportation system: A review and bibliometric analysis," International Journal of Production Economics, Elsevier, vol. 231(C).
    18. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    19. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    20. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).

Corrections

All material on this site has been provided by the respective publishers and authors. For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor .

IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.