
Affordable Uplift: Supervised Randomization in Controlled Experiments

Author

Listed:
  • Haupt, Johannes
  • Jacob, Daniel
  • Gubela, Robin M.
  • Lessmann, Stefan

Abstract

Customer scoring models are the core of scalable direct marketing. Uplift models provide an estimate of the incremental benefit from a treatment that is used for operational decision-making. Training and monitoring of uplift models require experimental data. However, the collection of data under randomized treatment assignment is costly, since random targeting deviates from an established targeting policy. To increase the cost-efficiency of experimentation and facilitate frequent data collection and model training, we introduce supervised randomization, a novel approach that integrates existing scoring models into randomized trials to target relevant customers while ensuring consistent estimates of treatment effects through a correction for active sample selection. An empirical Monte Carlo study shows that data collection under supervised randomization is cost-efficient, while downstream uplift models perform competitively.
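
The abstract describes supervised randomization only at a high level. The sketch below is a minimal Python illustration of the general idea, assuming that the score of an existing targeting model is mapped to a treatment propensity bounded away from 0 and 1, that the realized propensity is recorded for every customer, and that treatment effects are afterwards corrected by inverse-propensity weighting. The function names, the linear score-to-propensity mapping, and the simulated outcome process are hypothetical choices for illustration, not the procedure from the paper.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def supervised_randomization(score, p_min=0.1, p_max=0.9):
        # Map an existing model score in [0, 1] to a treatment propensity that
        # favours high-scoring customers but stays bounded away from 0 and 1,
        # so every customer can land in either the treatment or control group.
        propensity = p_min + (p_max - p_min) * score
        treatment = rng.binomial(1, propensity)
        return treatment, propensity

    def ipw_ate(outcome, treatment, propensity):
        # Inverse-propensity (Horvitz-Thompson) estimate of the average
        # treatment effect, correcting for the non-uniform assignment.
        treated = treatment * outcome / propensity
        control = (1 - treatment) * outcome / (1 - propensity)
        return np.mean(treated - control)

    # Toy illustration: hypothetical scores from an existing targeting model
    # and a simulated binary outcome with a true uplift of 0.03.
    n = 100_000
    score = rng.uniform(size=n)
    treatment, propensity = supervised_randomization(score)
    outcome = rng.binomial(1, 0.05 + 0.03 * treatment + 0.10 * score)
    print(ipw_ate(outcome, treatment, propensity))  # close to 0.03

Under uniform randomization the propensity is constant and the weighted estimator reduces to a simple difference in means; the point of the correction is that the estimate stays consistent when the assignment probability depends on the model score.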

Suggested Citation

  • Haupt, Johannes & Jacob, Daniel & Gubela, Robin M. & Lessmann, Stefan, 2019. "Affordable Uplift: Supervised Randomization in Controlled Experiments," IRTG 1792 Discussion Papers 2019-026, Humboldt University of Berlin, International Research Training Group 1792 "High Dimensional Nonstationary Time Series".
  • Handle: RePEc:zbw:irtgdp:2019026

    Download full text from publisher

    File URL: https://www.econstor.eu/bitstream/10419/230802/1/irtg1792dp2019-026.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Eric M. Schwartz & Eric T. Bradlow & Peter S. Fader, 2017. "Customer Acquisition via Display Advertising Using Multi-Armed Bandit Experiments," Marketing Science, INFORMS, vol. 36(4), pages 500-522, July.
    2. Brett R. Gordon & Florian Zettelmeyer & Neha Bhargava & Dan Chapsky, 2019. "A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook," Marketing Science, INFORMS, vol. 38(2), pages 193-225, March.
    3. Michael C Knaus & Michael Lechner & Anthony Strittmatter, 2021. "Machine learning estimation of heterogeneous causal effects: Empirical Monte Carlo evidence," The Econometrics Journal, Royal Economic Society, vol. 24(1), pages 134-161.
    4. Marco Caliendo & Michel Clement & Dominik Papies & Sabine Scheel-Kopeinig, 2012. "Research Note ---The Cost Impact of Spam Filters: Measuring the Effect of Information System Technologies in Organizations," Information Systems Research, INFORMS, vol. 23(3-part-2), pages 1068-1080, September.
    5. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    6. Robin Gubela & Artem Bequé & Stefan Lessmann & Fabian Gebert, 2019. "Conversion Uplift in E-Commerce: A Systematic Benchmark of Modeling Strategies," International Journal of Information Technology & Decision Making (IJITDM), World Scientific Publishing Co. Pte. Ltd., vol. 18(03), pages 747-791, May.
    7. Athey, Susan & Wager, Stefan, 2017. "Efficient Policy Learning," Research Papers 3506, Stanford University, Graduate School of Business.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Jacob, Daniel & Härdle, Wolfgang Karl & Lessmann, Stefan, 2019. "Group Average Treatment Effects for Observational Studies," IRTG 1792 Discussion Papers 2019-028, Humboldt University of Berlin, International Research Training Group 1792 "High Dimensional Nonstationary Time Series".

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Johannes Haupt & Stefan Lessmann, 2020. "Targeting customers under response-dependent costs," Papers 2003.06271, arXiv.org, revised Aug 2021.
    2. Michael C. Knaus, 2021. "A double machine learning approach to estimate the effects of musical practice on student’s skills," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 184(1), pages 282-300, January.
    3. Michael C Knaus, 2022. "Double machine learning-based programme evaluation under unconfoundedness [Econometric methods for program evaluation]," The Econometrics Journal, Royal Economic Society, vol. 25(3), pages 602-627.
    4. Denis Fougère & Nicolas Jacquemet, 2020. "Policy Evaluation Using Causal Inference Methods," SciencePo Working papers Main hal-03455978, HAL.
    5. Cockx, Bart & Lechner, Michael & Bollens, Joost, 2023. "Priority to unemployed immigrants? A causal machine learning evaluation of training in Belgium," Labour Economics, Elsevier, vol. 80(C).
    6. Daniel Boller & Michael Lechner & Gabriel Okasa, 2021. "The Effect of Sport in Online Dating: Evidence from Causal Machine Learning," Papers 2104.04601, arXiv.org.
    7. Daniel Jacob, 2021. "CATE meets ML -- The Conditional Average Treatment Effect and Machine Learning," Papers 2104.09935, arXiv.org, revised Apr 2021.
    8. Haupt, Johannes & Lessmann, Stefan, 2020. "Targeting Customers Under Response-Dependent Costs," IRTG 1792 Discussion Papers 2020-005, Humboldt University of Berlin, International Research Training Group 1792 "High Dimensional Nonstationary Time Series".
    9. Garrett Johnson & Julian Runge & Eric Seufert, 2022. "Privacy-Centric Digital Advertising: Implications for Research," Customer Needs and Solutions, Springer;Institute for Sustainable Innovation and Growth (iSIG), vol. 9(1), pages 49-54, June.
    10. Huber, Martin, 2019. "An introduction to flexible methods for policy evaluation," FSES Working Papers 504, Faculty of Economics and Social Sciences, University of Freiburg/Fribourg Switzerland.
    11. Hana Choi & Carl F. Mela & Santiago R. Balseiro & Adam Leary, 2020. "Online Display Advertising Markets: A Literature Review and Future Directions," Information Systems Research, INFORMS, vol. 31(2), pages 556-575, June.
    12. Goller, Daniel & Harrer, Tamara & Lechner, Michael & Wolff, Joachim, 2021. "Active labour market policies for the long-term unemployed: New evidence from causal machine learning," Economics Working Paper Series 2108, University of St. Gallen, School of Economics and Political Science.
    13. Nathan Kallus, 2022. "Treatment Effect Risk: Bounds and Inference," Papers 2201.05893, arXiv.org, revised Jul 2022.
    14. Brett R. Gordon & Robert Moakler & Florian Zettelmeyer, 2022. "Close Enough? A Large-Scale Exploration of Non-Experimental Approaches to Advertising Measurement," Papers 2201.07055, arXiv.org, revised Oct 2022.
    15. Brett R Gordon & Kinshuk Jerath & Zsolt Katona & Sridhar Narayanan & Jiwoong Shin & Kenneth C Wilbur, 2019. "Inefficiencies in Digital Advertising Markets," Papers 1912.09012, arXiv.org, revised Feb 2020.
    16. Brett R. Gordon & Robert Moakler & Florian Zettelmeyer, 2023. "Close Enough? A Large-Scale Exploration of Non-Experimental Approaches to Advertising Measurement," Marketing Science, INFORMS, vol. 42(4), pages 768-793, July.
    17. Roberto Esposti, 2022. "The Coevolution of Policy Support and Farmers' Behaviour. An investigation on Italian agriculture over the 2008-2019 period," Working Papers 464, Universita' Politecnica delle Marche (I), Dipartimento di Scienze Economiche e Sociali.
    18. Strittmatter, Anthony, 2023. "What is the value added by using causal machine learning methods in a welfare experiment evaluation?," Labour Economics, Elsevier, vol. 84(C).
    19. Nora Bearth & Michael Lechner, 2024. "Causal Machine Learning for Moderation Effects," Papers 2401.08290, arXiv.org.
    20. Ratchford, Brian & Soysal, Gonca & Zentner, Alejandro & Gauri, Dinesh K., 2022. "Online and offline retailing: What we know and directions for future research," Journal of Retailing, Elsevier, vol. 98(1), pages 152-177.

    More about this item

    Keywords

    Uplift Modeling; Causal Inference; Experimental Design; Selection Bias;

    JEL classification:

    • C00 - Mathematical and Quantitative Methods - - General - - - General



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.