
A Q-Learning-Based Approximate Solving Algorithm for Vehicular Route Game

Author

Listed:
  • Le Zhang

    (School of Transport and Logistics, Guangzhou Railway Polytechnic, Guangzhou 510430, China)

  • Lijing Lyu

    (School of Management, Guangzhou Huali Science and Technology Vocational College, Guangzhou 511325, China)

  • Shanshui Zheng

    (School of Transport and Logistics, Guangzhou Railway Polytechnic, Guangzhou 510430, China)

  • Li Ding

    (School of Physics and Optoelectronics, South China University of Technology, Guangzhou 510630, China)

  • Lang Xu

    (School of Transport and Communications, Shanghai Maritime University, Shanghai 201306, China)

Abstract

Route games are recognized as an effective method to alleviate Braess’ paradox, in which new traffic congestion arises because numerous vehicles follow the same guidance from a selfish route-guidance service (such as Google Maps). Conventional route games are symmetric: a vehicle’s payoff depends only on the distribution of chosen routes, not on which vehicle chose them, so the exact Nash equilibrium can be computed by constructing a special potential function. However, with the arrival of smart cities, engineers are more concerned with the real-time performance of route schemes than with their absolute optimality in real traffic, and reconstructing new potential functions under dynamic traffic conditions is not an easy task. In this paper, in contrast to the hard-to-solve potential-function-based exact method, a matched Q-learning algorithm is designed to generate an approximate Nash equilibrium of the classic route game for real-time traffic. An experimental study shows that the Nash equilibrium coefficients generated by the Q-learning-based approximate solving algorithm all converge to 1.00 and retain the required convergence under different traffic parameters.
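The paper's algorithm is only summarized above, so the following is a minimal Python sketch of the general idea: independent Q-learning agents repeatedly choosing routes in a symmetric two-route congestion game, with a simple "no profitable unilateral deviation" check used as a stand-in for the paper's Nash equilibrium coefficient. The network, the linear latency functions, every parameter value, and that stand-in metric are illustrative assumptions, not the authors' implementation.

```python
import random

# --- Hypothetical toy setup (assumed; not the paper's network or parameters) ---
N_VEHICLES = 50            # vehicles playing the route game
ROUTES = [0, 1]            # two parallel routes for one origin-destination pair
FREE_FLOW = [10.0, 15.0]   # free-flow travel times (assumed)
SLOPE = [0.5, 0.2]         # congestion sensitivity of each route (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.0, 0.1  # learning rate, discount, exploration rate
EPISODES = 2000

def travel_time(route, load):
    """Linear latency: symmetric, depends only on how many vehicles chose the route."""
    return FREE_FLOW[route] + SLOPE[route] * load

# One Q-table per vehicle, indexed by route (stateless repeated game).
Q = [[0.0 for _ in ROUTES] for _ in range(N_VEHICLES)]

for _ in range(EPISODES):
    # Each vehicle picks a route epsilon-greedily from its own Q-values.
    choices = [
        random.choice(ROUTES) if random.random() < EPSILON
        else max(ROUTES, key=lambda r: Q[i][r])
        for i in range(N_VEHICLES)
    ]
    loads = [choices.count(r) for r in ROUTES]
    # Reward is negative travel time; with GAMMA = 0 this is a repeated one-shot game.
    for i, r in enumerate(choices):
        reward = -travel_time(r, loads[r])
        Q[i][r] += ALPHA * (reward + GAMMA * max(Q[i]) - Q[i][r])

# Evaluate the greedy joint choice: the fraction of vehicles that cannot reduce their
# travel time by unilaterally switching routes (assumed proxy for the paper's metric).
final = [max(ROUTES, key=lambda r: Q[i][r]) for i in range(N_VEHICLES)]
loads = [final.count(r) for r in ROUTES]
stable = sum(
    1 for i, r in enumerate(final)
    if travel_time(r, loads[r]) <= travel_time(1 - r, loads[1 - r] + 1)
)
print("fraction with no profitable unilateral deviation:", stable / N_VEHICLES)
```

As the printed fraction approaches 1.00, the learned joint route choice behaves like an approximate Nash equilibrium, which mirrors the convergence behaviour reported in the abstract.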

Suggested Citation

  • Le Zhang & Lijing Lyu & Shanshui Zheng & Li Ding & Lang Xu, 2022. "A Q-Learning-Based Approximate Solving Algorithm for Vehicular Route Game," Sustainability, MDPI, vol. 14(19), pages 1-14, September.
  • Handle: RePEc:gam:jsusta:v:14:y:2022:i:19:p:12033-:d:922987

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/14/19/12033/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/14/19/12033/
    Download Restriction: no

    References listed on IDEAS

    1. Du, Lili & Han, Lanshan & Li, Xiang-Yang, 2014. "Distributed coordinated in-vehicle online routing using mixed-strategy congestion game," Transportation Research Part B: Methodological, Elsevier, vol. 67(C), pages 1-17.
    2. Milchtaich, Igal, 1996. "Congestion Games with Player-Specific Payoff Functions," Games and Economic Behavior, Elsevier, vol. 13(1), pages 111-124, March.
    3. Sam Ganzfried, 2021. "Algorithm for Computing Approximate Nash Equilibrium in Continuous Games with Application to Continuous Blotto," Games, MDPI, vol. 12(2), pages 1-11, June.
    4. Sam Ganzfried, 2020. "Algorithm for Computing Approximate Nash Equilibrium in Continuous Games with Application to Continuous Blotto," Papers 2006.07443, arXiv.org, revised Jun 2021.
    5. Suhan Wu & Min Luo & Jingxia Zhang & Daoheng Zhang & Lianmin Zhang, 2022. "Pharmaceutical Supply Chain in China: Pricing and Production Decisions with Price-Sensitive and Uncertain Demand," Sustainability, MDPI, vol. 14(13), pages 1-28, June.
    6. Du, Lili & Han, Lanshan & Chen, Shuwei, 2015. "Coordinated online in-vehicle routing balancing user optimality and system optimality through information perturbation," Transportation Research Part B: Methodological, Elsevier, vol. 79(C), pages 121-133.
    7. Tanzina Afrin & Nita Yodo, 2020. "A Survey of Road Traffic Congestion Measures towards a Sustainable and Resilient Transportation System," Sustainability, MDPI, vol. 12(11), pages 1-23, June.
    8. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    9. Tobias Harks & Max Klimm, 2012. "On the Existence of Pure Nash Equilibria in Weighted Congestion Games," Mathematics of Operations Research, INFORMS, vol. 37(3), pages 419-436, August.
    10. Hsiao-Hsien Lin & I-Cheng Hsu & Tzu-Yun Lin & Le-Ming Tung & Ying Ling, 2022. "After the Epidemic, Is the Smart Traffic Management System a Key Factor in Creating a Green Leisure and Tourism Environment in the Move towards Sustainable Urban Development?," Sustainability, MDPI, vol. 14(7), pages 1-22, March.
    11. Zhou, Bo & Song, Qiankun & Zhao, Zhenjiang & Liu, Tangzhi, 2020. "A reinforcement learning scheme for the equilibrium of the in-vehicle route choice problem based on congestion game," Applied Mathematics and Computation, Elsevier, vol. 371(C).
    12. Insaf Ullah & Muhammad Asghar Khan & Mohammed H. Alsharif & Rosdiadee Nordin, 2021. "An Anonymous Certificateless Signcryption Scheme for Secure and Efficient Deployment of Internet of Vehicles," Sustainability, MDPI, vol. 13(19), pages 1-19, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ning, Yuqiang & Du, Lili, 2023. "Robust and resilient equilibrium routing mechanism for traffic congestion mitigation built upon correlated equilibrium and distributed optimization," Transportation Research Part B: Methodological, Elsevier, vol. 168(C), pages 170-205.
    2. Tami Tamir, 2023. "Cost-sharing games in real-time scheduling systems," International Journal of Game Theory, Springer;Game Theory Society, vol. 52(1), pages 273-301, March.
    3. Baiocchi, Andrea, 2016. "Analysis of timer-based message dissemination protocols for inter-vehicle communications," Transportation Research Part B: Methodological, Elsevier, vol. 90(C), pages 105-134.
    4. Pi, Xidong & Qian, Zhen (Sean), 2017. "A stochastic optimal control approach for real-time traffic routing considering demand uncertainties and travelers’ choice heterogeneity," Transportation Research Part B: Methodological, Elsevier, vol. 104(C), pages 710-732.
    5. Corine M. Laan & Judith Timmer & Richard J. Boucherie, 2021. "Non-cooperative queueing games on a network of single server queues," Queueing Systems: Theory and Applications, Springer, vol. 97(3), pages 279-301, April.
    6. Louis Abraham, 2023. "A Game of Competition for Risk," Working Papers hal-04112160, HAL.
    7. Harks, Tobias & von Falkenhausen, Philipp, 2014. "Optimal cost sharing for capacitated facility location games," European Journal of Operational Research, Elsevier, vol. 239(1), pages 187-198.
    8. João Ricardo Faria & Daniel Arce, 2022. "A Preface for the Special Issue “Economics of Conflict and Terrorism”," Games, MDPI, vol. 13(2), pages 1-2, April.
    9. Louis Abraham, 2023. "A Game of Competition for Risk," Papers 2305.18941, arXiv.org.
    10. Philipp von Falkenhausen & Tobias Harks, 2013. "Optimal Cost Sharing for Resource Selection Games," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 184-208, February.
    11. Liu, Siyuan & Qu, Qiang, 2016. "Dynamic collective routing using crowdsourcing data," Transportation Research Part B: Methodological, Elsevier, vol. 93(PA), pages 450-469.
    12. Vasilis Gkatzelis & Konstantinos Kollias & Tim Roughgarden, 2016. "Optimal Cost-Sharing in General Resource Selection Games," Operations Research, INFORMS, vol. 64(6), pages 1230-1238, December.
    13. Tobias Harks & Max Klimm, 2016. "Congestion Games with Variable Demands," Mathematics of Operations Research, INFORMS, vol. 41(1), pages 255-277, February.
    14. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    15. Veronika Harantová & Ambróz Hájnik & Alica Kalašová & Tomasz Figlus, 2022. "The Effect of the COVID-19 Pandemic on Traffic Flow Characteristics, Emissions Production and Fuel Consumption at a Selected Intersection in Slovakia," Energies, MDPI, vol. 15(6), pages 1-21, March.
    16. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    17. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    18. Christian Ewerhart, 2020. "Ordinal potentials in smooth games," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 70(4), pages 1069-1100, November.
    19. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    20. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:14:y:2022:i:19:p:12033-:d:922987. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.