Printed from https://ideas.repec.org/p/ecl/stabus/3963.html

Policy Learning with Adaptively Collected Data

Authors

  • Zhan, Ruohan (Institute for Computational and Mathematical Engineering, Stanford University)
  • Ren, Zhimei (Stanford University)
  • Athey, Susan (Stanford University)
  • Zhou, Zhengyuan (New York University)

Abstract

Learning optimal policies from historical data enables the gains from personalization to be realized in a wide variety of applications. The growing policy learning literature has focused on settings where the treatment assignment policy does not adapt to the data. However, adaptive data collection is becoming more common in practice, from two primary sources: (1) data collected from adaptive experiments designed to improve inferential efficiency; and (2) data collected from production systems that adaptively evolve an operational policy to improve performance over time (e.g., contextual bandits). In this paper, we aim to address the challenge of learning the optimal policy from adaptively collected data and provide one of the first theoretical inquiries into this problem. We propose an algorithm based on generalized augmented inverse propensity weighted (AIPW) estimators and establish its finite-sample regret bound. We complement this regret upper bound with a lower bound that characterizes the fundamental difficulty of policy learning with adaptive data. Finally, we demonstrate our algorithm's effectiveness using both synthetic data and public benchmark datasets.
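As a rough illustration of the AIPW construction mentioned in the abstract: a doubly robust score combines an outcome-model prediction for every arm with an inverse-propensity correction applied only to the arm that was actually pulled, and a candidate policy is scored by averaging the scores of the arms it would choose. This is a minimal sketch of the standard AIPW estimator, not the paper's generalized version (which additionally uses adaptive stabilizing weights to handle the time-varying propensities of adaptively collected data); all names below are illustrative.

```python
import numpy as np

def aipw_scores(y, a, e, mu_hat):
    """Doubly robust (AIPW) score matrix gamma[i, k] for each arm k.

    y      : (n,) observed outcomes
    a      : (n,) arm actually assigned, integers in 0..K-1
    e      : (n, K) assignment probabilities logged at collection time
             (under adaptive collection these vary over time)
    mu_hat : (n, K) outcome-model predictions for every arm
    """
    gamma = mu_hat.copy()
    rows = np.arange(len(y))
    # inverse-propensity correction only on the arm that was pulled
    gamma[rows, a] += (y - mu_hat[rows, a]) / e[rows, a]
    return gamma

def policy_value(gamma, pi):
    """Estimate the value of a deterministic policy pi (array of arm indices)."""
    rows = np.arange(len(pi))
    return gamma[rows, pi].mean()
```

The scores are unbiased for the arm-wise mean outcomes if either the propensities or the outcome model is correct, which is why AIPW-style scores are a natural building block for offline policy learning.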

Suggested Citation

  • Zhan, Ruohan & Ren, Zhimei & Athey, Susan & Zhou, Zhengyuan, 2021. "Policy Learning with Adaptively Collected Data," Research Papers 3963, Stanford University, Graduate School of Business.
  • Handle: RePEc:ecl:stabus:3963
Download full text from publisher

    File URL: https://www.gsb.stanford.edu/faculty-research/working-papers/policy-learning-adaptively-collected-data
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc project.

    Cited by:

    1. Masahiro Kato, 2025. "Semi-Supervised Treatment Effect Estimation with Unlabeled Covariates via Generalized Riesz Regression," Papers 2511.08303, arXiv.org.
    2. Hamsa Bastani & Osbert Bastani & Bryce McLaughlin, 2025. "Beating the Winner's Curse via Inference-Aware Policy Optimization," Papers 2510.18161, arXiv.org, revised Oct 2025.
    3. Masahiro Kato & Kyohei Okumura & Takuya Ishihara & Toru Kitagawa, 2024. "Adaptive Experimental Design for Policy Learning," Papers 2401.03756, arXiv.org, revised Jun 2025.
    4. Keshav Agrawal & Susan Athey & Ayush Kanodia & Emil Palikot, 2022. "Personalized Recommendations in EdTech: Evidence from a Randomized Controlled Trial," Papers 2208.13940, arXiv.org, revised Dec 2022.
    5. Shantanu Gupta & Zachary C. Lipton & David Childers, 2021. "Efficient Online Estimation of Causal Effects by Deciding What to Observe," Papers 2108.09265, arXiv.org, revised Oct 2021.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.