
Deep reinforcement learning approaches for global public health strategies for COVID-19 pandemic

Author

Listed:
  • Gloria Hyunjung Kwak
  • Lowell Ling
  • Pan Hui

Abstract

Background: Unprecedented public health measures have been used during the coronavirus disease 2019 (COVID-19) pandemic to control the spread of the SARS-CoV-2 virus. Implementing timely and appropriate public health interventions is a challenge.

Methods and findings: Population and COVID-19 epidemiological data between 21 January 2020 and 15 November 2020 from 216 countries and territories were included, together with the public health interventions implemented. We used deep reinforcement learning to train agents to find public health strategies that maximized a total reward for controlling the spread of COVID-19. The strategies suggested by the algorithm were analyzed against the actual timing and intensity of lockdowns and travel restrictions. Early implementation of actual lockdown and travel restriction policies, usually around the time of the local index case, was associated with a lower burden of COVID-19. In contrast, our agent suggested initiating at least minimal-intensity lockdown or travel restriction on or even before the day of the index case in each country and territory. In addition, the agent mostly recommended a combination of lockdown and travel restrictions, at higher intensities than the policies governments implemented, but it did not always encourage rapid full lockdown and full border closure. A limitation of this study is that it relied on data that were incomplete owing to the emerging COVID-19 epidemic and to inconsistent testing and reporting. In addition, our research focuses only on the population health benefit of controlling the spread of COVID-19, without balancing the negative economic and social consequences.

Interpretation: Compared to actual government implementation, our algorithm mostly recommended earlier and more intense lockdowns and travel restrictions. Reinforcement learning may be used as a decision support tool for implementing public health interventions during COVID-19 and future pandemics.
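The abstract describes agents trained by reinforcement learning to choose intervention intensities that maximize a reward for suppressing spread. The sketch below is a minimal, hypothetical illustration of that kind of setup, not the paper's actual model: the toy epidemic dynamics, the three intervention levels, the assumed transmission effects, and the reward that only penalizes prevalence are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

LEVELS = 3  # intervention intensity: 0 = none, 1 = partial, 2 = full
ACTIONS = [(l, t) for l in range(LEVELS) for t in range(LEVELS)]  # (lockdown, travel)

def step(infected, action):
    """Toy SIS-like update: each intervention level cuts transmission.

    The effect sizes (0.3 per lockdown level, 0.2 per travel level) and the
    0.1 recovery rate are illustrative assumptions.
    """
    lockdown, travel = action
    beta = 0.4 * (1 - 0.3 * lockdown) * (1 - 0.2 * travel)
    new_infected = min(1.0, infected * (1 + beta - 0.1))
    reward = -new_infected  # penalize spread only; no economic/social cost term
    return new_infected, reward

def discretize(infected, bins=10):
    """Map infection prevalence in [0, 1] to a discrete state index."""
    return min(int(infected * bins), bins - 1)

# Tabular Q-learning over discretized prevalence (a stand-in for the deep agent).
Q = np.zeros((10, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    infected = 0.01  # start near the index case
    for t in range(50):
        s = discretize(infected)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        infected, r = step(infected, ACTIONS[a])
        s2 = discretize(infected)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# Greedy policy at low prevalence: does the agent intervene before spread takes off?
best = ACTIONS[int(Q[discretize(0.01)].argmax())]
print("recommended (lockdown, travel) intensity at 1% prevalence:", best)
```

Because the toy reward has no cost for intervening, the learned policy favors strong early intervention, echoing the abstract's finding that the agent recommended at least minimal restrictions around the index case; the paper's actual agent, reward, and state space differ.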

Suggested Citation

  • Gloria Hyunjung Kwak & Lowell Ling & Pan Hui, 2021. "Deep reinforcement learning approaches for global public health strategies for COVID-19 pandemic," PLOS ONE, Public Library of Science, vol. 16(5), pages 1-15, May.
  • Handle: RePEc:plo:pone00:0251550
    DOI: 10.1371/journal.pone.0251550

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0251550
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0251550&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0251550?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.
    ---><---

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Meng, Xin & Guo, Mingxue & Gao, Ziyou & Kang, Liujiang, 2023. "Interaction between travel restriction policies and the spread of COVID-19," Transport Policy, Elsevier, vol. 136(C), pages 209-227.
    2. Fatemeh Navazi & Yufei Yuan & Norm Archer, 2022. "The effect of the Ontario stay-at-home order on Covid-19 third wave infections including vaccination considerations: An interrupted time series analysis," PLOS ONE, Public Library of Science, vol. 17(4), pages 1-18, April.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0251550. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item, and it also lets you accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.