
Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles

Author

Listed:
  • Zhang, Yihao
  • Chai, Zhaojie
  • Lykotrafitis, George

Abstract

Efficient emergency evacuation is crucial for survival. A very successful model for simulating emergency evacuation is the social force model. At the heart of the model is the self-driven force that is applied to an agent and is directed towards the exit. However, it is not clear that applying this force results in optimal evacuation, especially in complex environments with obstacles. In this paper, we develop a deep reinforcement learning algorithm in association with the social force model to train agents to find the fastest evacuation path. During training, we penalize every step an agent takes in the room and give zero reward at the exit. We adopt the Dyna-Q learning approach, which combines the model-free Q-learning algorithm with model-based reinforcement learning, to update a deep neural network that approximates the action value functions. We first show that in a room without obstacles the resulting self-driven force points directly towards the exit, as in the social force model. To quantitatively validate our method, we compare the total time elapsed when agents escape a room with one door and no obstacles under the Dyna-Q model with the result obtained using the social force model, and find that the median exit time intervals calculated by the two methods are not significantly different. We confirm that the proposed method obtains trajectories that minimize the travel time by comparing our results to results generated by geodesics-based adaptive pedestrian dynamics. Then, we investigate evacuation of a room with one obstacle and one exit. Our method produces results similar to those of the social force model when the obstacle is convex. For concave obstacles, however, which can act as traps for agents governed purely by the social force model and prevent complete room evacuation, our approach is clearly advantageous: it derives a policy that results in obstacle avoidance and complete room evacuation without additional assumptions. We also study evacuation of a room with multiple exits and show that agents evacuate efficiently through the nearest exit using a shared network trained for a single agent. Finally, we test the robustness of the Dyna-Q learning approach in a complex environment with multiple exits and obstacles. Overall, we show that our model, based on the Dyna-Q reinforcement learning approach, can efficiently handle emergency evacuation in complex environments with multiple room exits and obstacles, where it is difficult to obtain an intuitive rule for fast evacuation.
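For readers unfamiliar with the two ingredients the abstract names: in the Helbing-style social force model the self-driven force on agent i is typically written as m_i (v_i^0 e_i - v_i) / tau_i, where e_i is the desired direction (toward the exit), v_i^0 the desired speed, and tau_i a relaxation time; the paper's contribution is, in effect, to learn the direction e_i rather than fix it. The sketch below is a minimal tabular Dyna-Q loop on a toy discretized room with the reward scheme the abstract describes (a penalty on every step, zero reward at the exit). It is an illustration only: the grid size, exit location, and hyperparameters are assumptions, and the paper itself approximates the action value function with a deep neural network coupled to continuous particle dynamics rather than a lookup table.

    import numpy as np
    import random

    # Toy tabular Dyna-Q for room evacuation (illustrative assumptions
    # throughout; the paper uses a deep Q-network and social-force dynamics).
    W, H = 10, 10                       # room discretized as a 10x10 grid (assumed)
    EXIT = (9, 5)                       # exit cell (assumed)
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # candidate move directions

    alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration
    n_planning = 20                     # model-based planning updates per real step

    Q = np.zeros((W, H, len(ACTIONS)))  # action-value table Q(s, a)
    model = {}                          # learned model: (s, a) -> (r, s', done)

    def step(s, a):
        """Environment: -1 reward per step, 0 on reaching the exit."""
        nx = min(max(s[0] + ACTIONS[a][0], 0), W - 1)
        ny = min(max(s[1] + ACTIONS[a][1], 0), H - 1)
        s2 = (nx, ny)
        return (0.0, s2, True) if s2 == EXIT else (-1.0, s2, False)

    for episode in range(500):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            a = random.randrange(len(ACTIONS)) if random.random() < eps \
                else int(np.argmax(Q[s[0], s[1]]))
            r, s2, done = step(s, a)
            # model-free Q-learning update from the real transition
            target = r + (0.0 if done else gamma * Q[s2[0], s2[1]].max())
            Q[s[0], s[1], a] += alpha * (target - Q[s[0], s[1], a])
            model[(s, a)] = (r, s2, done)
            # Dyna planning: replay transitions from the learned model
            for _ in range(n_planning):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                ptarget = pr + (0.0 if pdone else gamma * Q[ps2[0], ps2[1]].max())
                Q[ps[0], ps[1], pa] += alpha * (ptarget - Q[ps[0], ps[1], pa])
            s = s2

After training, the greedy action argmax_a Q(s, a) at each cell plays the role of the learned desired direction e_i: in an empty room it points essentially straight at the exit, reproducing the social force heuristic, while around a concave obstacle it encodes the detour that a fixed exit-pointing force cannot find.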

Suggested Citation

  • Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
  • Handle: RePEc:eee:phsmap:v:571:y:2021:i:c:s0378437121001175
    DOI: 10.1016/j.physa.2021.125845

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0378437121001175
    Download Restriction: Full text for ScienceDirect subscribers only. The journal offers the option of making the article available online on ScienceDirect for a fee of $3,000.

    File URL: https://libkey.io/10.1016/j.physa.2021.125845?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ha, Vi & Lykotrafitis, George, 2012. "Agent-based modeling of a multi-room multi-floor building emergency evacuation," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 391(8), pages 2740-2751.
    2. Dirk Helbing & Illés Farkas & Tamás Vicsek, 2000. "Simulating dynamical features of escape panic," Nature, Nature, vol. 407(6803), pages 487-490, September.
    3. Manxia Liu & Weiliang Zeng & Peng Chen & Xuyi Wu, 2017. "A microscopic simulation model for pedestrian-pedestrian and pedestrian-vehicle interactions at crosswalks," PLOS ONE, Public Library of Science, vol. 12(7), pages 1-23, July.
    4. Hoogendoorn, S. P. & Bovy, P. H. L., 2004. "Pedestrian route-choice and activity scheduling theory and models," Transportation Research Part B: Methodological, Elsevier, vol. 38(2), pages 169-190, February.
    5. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui, et al., 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    6. Helbing, Dirk, 1993. "Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 196(4), pages 546-573.
    7. Johansson, Fredrik & Peterson, Anders & Tapani, Andreas, 2015. "Waiting pedestrians in the social force model," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 419(C), pages 95-107.
    8. Song, Xiao & Ma, Liang & Ma, Yaofei & Yang, Chen & Ji, Hang, 2016. "Selfishness- and Selflessness-based models of pedestrian room evacuation," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 447(C), pages 455-466.
    9. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie, et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    10. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, et al., 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Zhang, Ke & Lin, Xi & Li, Meng, 2023. "Graph attention reinforcement learning with flexible matching policies for multi-depot vehicle routing problems," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 611(C).
    2. Guo, Kai & Zhang, Limao, 2022. "Adaptive multi-objective optimization for emergency evacuation at metro stations," Reliability Engineering and System Safety, Elsevier, vol. 219(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    2. Johansson, Fredrik & Peterson, Anders & Tapani, Andreas, 2015. "Waiting pedestrians in the social force model," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 419(C), pages 95-107.
    3. Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
    4. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    5. Li, Maosheng & Shu, Panpan & Xiao, Yao & Wang, Pu, 2021. "Modeling detour decision combined the tactical and operational layer based on perceived density," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 574(C).
    6. Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.
    7. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    8. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
    9. Qu, Xiaobo & Yu, Yang & Zhou, Mofan & Lin, Chin-Teng & Wang, Xiangyu, 2020. "Jointly dampening traffic oscillations and improving energy consumption with electric, connected and automated vehicles: A reinforcement learning based approach," Applied Energy, Elsevier, vol. 257(C).
    10. Matt Taddy, 2018. "The Technological Elements of Artificial Intelligence," NBER Chapters, in: The Economics of Artificial Intelligence: An Agenda, pages 61-87, National Bureau of Economic Research, Inc.
    11. Jermain C. Kaminski & Christian Hopp, 2020. "Predicting outcomes in crowdfunding campaigns with textual, visual, and linguistic signals," Small Business Economics, Springer, vol. 55(3), pages 627-649, October.
    12. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    13. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    14. Yuchao Dong, 2022. "Randomized Optimal Stopping Problem in Continuous time and Reinforcement Learning Algorithm," Papers 2208.02409, arXiv.org, revised Sep 2023.
    15. Shijun Wang & Baocheng Zhu & Chen Li & Mingzhe Wu & James Zhang & Wei Chu & Yuan Qi, 2020. "Riemannian Proximal Policy Optimization," Computer and Information Science, Canadian Center of Science and Education, vol. 13(3), pages 1-93, August.
    16. Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
    17. Lai, Jianfa & Weng, Lin-Chen & Peng, Xiaoling & Fang, Kai-Tai, 2022. "Construction of symmetric orthogonal designs with deep Q-network and orthogonal complementary design," Computational Statistics & Data Analysis, Elsevier, vol. 171(C).
    18. Ricardo S. Alonso & Inés Sittón-Candanedo & Roberto Casado-Vara & Javier Prieto & Juan M. Corchado, 2020. "Deep Reinforcement Learning for the Management of Software-Defined Networks and Network Function Virtualization in an Edge-IoT Architecture," Sustainability, MDPI, vol. 12(14), pages 1-23, July.
    19. Lu, Peng & Wen, Feier & Li, Yan & Chen, Dianhan, 2021. "Multi-agent modeling of crowd dynamics under mass shooting cases," Chaos, Solitons & Fractals, Elsevier, vol. 153(P2).
    20. Zechu Li & Xiao-Yang Liu & Jiahao Zheng & Zhaoran Wang & Anwar Walid & Jian Guo, 2021. "FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance," Papers 2111.05188, arXiv.org.
