
Efficient Use of Information in Adaptive Management with an Application to Managing Recreation near Golden Eagle Nesting Sites

Author

Listed:
  • Paul L Fackler
  • Krishna Pacifici
  • Julien Martin
  • Carol McIntyre

Abstract

It is generally the case that a significant degree of uncertainty exists concerning the behavior of ecological systems. Adaptive management has been developed to address such structural uncertainty, while recognizing that decisions must be made without full knowledge of how a system behaves. This paradigm attempts to use new information that develops during the course of management to learn how the system works. To date, however, adaptive management has used a very limited information set to characterize the learning that is possible. This paper uses an extension of the Partially Observable Markov Decision Process (POMDP) framework to expand the information set used to update belief in competing models. This feature can potentially increase the speed of learning through adaptive management, and lead to better management in the future. We apply this framework to a case study wherein interest lies in managing recreational restrictions around golden eagle (Aquila chrysaetos) nesting sites. The ultimate management objective is to maintain an abundant eagle population in Denali National Park while minimizing the regulatory burden on park visitors. In order to capture this objective, we developed a utility function that trades off expected breeding success with hiker access. Our work is relevant to the management of human activities in protected areas, but more generally demonstrates some of the benefits of the POMDP framework in the context of adaptive management.
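The core learning mechanism the abstract describes is Bayesian updating of belief weights over competing system models, combined with a utility function that trades off breeding success against visitor access. The sketch below is purely illustrative and is not the paper's actual model: the two candidate models, the likelihood values, the breeding-success figures, and the tradeoff weight are all hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch only (assumed values, not the paper's model):
# belief weights over two competing models of how hiker disturbance
# affects golden eagle breeding success are updated from new data.

def update_beliefs(prior, likelihoods):
    """Posterior model weights given prior weights and the likelihood
    each model assigns to the newly observed outcome."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

def utility(expected_breeding_success, hiker_access, weight=0.7):
    """Hypothetical utility trading off breeding success against access."""
    return weight * expected_breeding_success + (1 - weight) * hiker_access

# Equal initial belief in the two candidate models.
prior = np.array([0.5, 0.5])
# Assumed likelihoods of the observed monitoring outcome under each model.
likelihoods = np.array([0.30, 0.10])
posterior = update_beliefs(prior, likelihoods)   # belief shifts toward model 1

# Evaluate a candidate restriction policy under the updated beliefs,
# using assumed breeding-success rates predicted by each model.
breeding_success_by_model = np.array([0.8, 0.6])
expected_success = posterior @ breeding_success_by_model
print(posterior, utility(expected_success, hiker_access=0.5))
```

In a full POMDP treatment this update would run over the belief state at every decision period, and the policy would be optimized over belief states rather than evaluated for a single candidate action as in this sketch.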

Suggested Citation

  • Paul L Fackler & Krishna Pacifici & Julien Martin & Carol McIntyre, 2014. "Efficient Use of Information in Adaptive Management with an Application to Managing Recreation near Golden Eagle Nesting Sites," PLOS ONE, Public Library of Science, vol. 9(8), pages 1-14, August.
  • Handle: RePEc:plo:pone00:0102434
    DOI: 10.1371/journal.pone.0102434

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0102434
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0102434&type=printable
    Download Restriction: no


    References listed on IDEAS

    1. George E. Monahan, 1982. "State of the Art---A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms," Management Science, INFORMS, vol. 28(1), pages 1-16, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Baggio, Michele & Fackler, Paul L., 2016. "Optimal management with reversible regime shifts," Journal of Economic Behavior & Organization, Elsevier, vol. 132(PB), pages 124-136.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wooseung Jang & J. George Shanthikumar, 2002. "Stochastic allocation of inspection capacity to competitive processes," Naval Research Logistics (NRL), John Wiley & Sons, vol. 49(1), pages 78-94, February.
    2. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2009. "Protocols with No Acknowledgment," Operations Research, INFORMS, vol. 57(4), pages 905-915, August.
    3. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    4. Williams, Byron K., 2009. "Markov decision processes in natural resources management: Observability and uncertainty," Ecological Modelling, Elsevier, vol. 220(6), pages 830-840.
    5. Xin Jin, 2021. "Can we imitate the principal investor's behavior to learn option price?," Papers 2105.11376, arXiv.org, revised Jan 2022.
    6. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    7. Churlzu Lim & J. Neil Bearden & J. Cole Smith, 2006. "Sequential Search with Multiattribute Options," Decision Analysis, INFORMS, vol. 3(1), pages 3-15, March.
    8. Anyan Qi & Hyun-Soo Ahn & Amitabh Sinha, 2017. "Capacity Investment with Demand Learning," Operations Research, INFORMS, vol. 65(1), pages 145-164, February.
    9. Chiel van Oosterom & Lisa M. Maillart & Jeffrey P. Kharoufeh, 2017. "Optimal maintenance policies for a safety‐critical system and its deteriorating sensor," Naval Research Logistics (NRL), John Wiley & Sons, vol. 64(5), pages 399-417, August.
    10. Ciriaco Valdez‐Flores & Richard M. Feldman, 1989. "A survey of preventive maintenance models for stochastically deteriorating single‐unit systems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 36(4), pages 419-446, August.
    11. Grosfeld-Nir, Abraham, 2007. "Control limits for two-state partially observable Markov decision processes," European Journal of Operational Research, Elsevier, vol. 182(1), pages 300-304, October.
    12. Malek Ebadi & Raha Akhavan-Tabatabaei, 2021. "Personalized Cotesting Policies for Cervical Cancer Screening: A POMDP Approach," Mathematics, MDPI, vol. 9(6), pages 1-20, March.
    13. Tianhu Deng & Zuo-Jun Max Shen & J. George Shanthikumar, 2014. "Statistical Learning of Service-Dependent Demand in a Multiperiod Newsvendor Setting," Operations Research, INFORMS, vol. 62(5), pages 1064-1076, October.
    14. Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
    15. Kıvanç, İpek & Özgür-Ünlüakın, Demet & Bilgiç, Taner, 2022. "Maintenance policy analysis of the regenerative air heater system using factored POMDPs," Reliability Engineering and System Safety, Elsevier, vol. 219(C).
    16. T Sloan, 2010. "First, do no harm? A framework for evaluating new versus reprocessed medical devices," Journal of the Operational Research Society, Palgrave Macmillan;The OR Society, vol. 61(2), pages 191-201, February.
    17. Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.
    18. İ. Esra Büyüktahtakın & Robert G. Haight, 2018. "A review of operations research models in invasive species management: state of the art, challenges, and future directions," Annals of Operations Research, Springer, vol. 271(2), pages 357-403, December.
    19. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
    20. Jue Wang, 2016. "Minimizing the false alarm rate in systems with transient abnormality," Naval Research Logistics (NRL), John Wiley & Sons, vol. 63(4), pages 320-334, June.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.