
A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook

Authors

Listed:
  • Brett R. Gordon

    (Kellogg School of Management, Northwestern University, Evanston, Illinois 60208)

  • Florian Zettelmeyer

    (Kellogg School of Management, Northwestern University, Evanston, Illinois 60208; National Bureau of Economic Research, Cambridge, Massachusetts 02138)

  • Neha Bhargava

    (Facebook Inc., Menlo Park, California 94025)

  • Dan Chapsky

    (Facebook Inc., Menlo Park, California 94025)

Abstract

Measuring the causal effects of digital advertising remains challenging despite the availability of granular data. Unobservable factors make exposure endogenous, and advertising’s effect on outcomes tends to be small. In principle, these concerns could be addressed using randomized controlled trials (RCTs). In practice, few online ad campaigns rely on RCTs and instead use observational methods to estimate ad effects. We assess empirically whether the variation in data typically available in the advertising industry enables observational methods to recover the causal effects of online advertising. Using data from 15 U.S. advertising experiments at Facebook comprising 500 million user-experiment observations and 1.6 billion ad impressions, we contrast the experimental results to those obtained from multiple observational models. The observational methods often fail to produce the same effects as the randomized experiments, even after conditioning on extensive demographic and behavioral variables. In our setting, advances in causal inference methods do not allow us to isolate the exogenous variation needed to estimate the treatment effects. We also characterize the incremental explanatory power our data would require to enable observational methods to successfully measure advertising effects. Our findings suggest that commonly used observational approaches based on the data usually available in the industry often fail to accurately measure the true effect of advertising.
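
To make the comparison concrete, below is a minimal simulation sketch (Python; not the authors' code — the "activity" confounder, the parameter values, and the 0.2-percentage-point true lift are illustrative assumptions) of the design the abstract describes: a randomized benchmark versus a naive observational contrast, plus an inverse-probability-weighted (IPW) estimate that recovers the effect only because the simulation hands it the true exposure propensity.

    # Minimal sketch (illustrative, not the paper's code): RCT benchmark vs.
    # observational estimates of ad lift when a confounder drives both
    # exposure and conversion.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500_000

    # "Activity" confounder: active users are likelier to be served the ad
    # and likelier to convert even without it.
    activity = rng.normal(size=n)

    def convert(exposed):
        # True incremental effect of exposure: 0.002 (0.2 percentage points).
        p = 0.02 + 0.01 * (activity > 1) + 0.002 * exposed
        return rng.random(n) < p

    # RCT: exposure is randomized, independent of activity.
    exposed_rct = rng.random(n) < 0.5
    y_rct = convert(exposed_rct)
    lift_rct = y_rct[exposed_rct].mean() - y_rct[~exposed_rct].mean()

    # Observational setting: the platform targets active users.
    propensity = 1.0 / (1.0 + np.exp(-(activity - 0.5)))
    exposed_obs = rng.random(n) < propensity
    y_obs = convert(exposed_obs)

    # Naive exposed-vs-unexposed comparison conflates targeting with the ad.
    lift_naive = y_obs[exposed_obs].mean() - y_obs[~exposed_obs].mean()

    # IPW with the *true* propensity reweights both groups back to the full
    # population. With only an estimated propensity and unobserved
    # confounders, this correction is incomplete.
    w1, w0 = 1.0 / propensity, 1.0 / (1.0 - propensity)
    lift_ipw = (np.sum(y_obs * exposed_obs * w1) / np.sum(exposed_obs * w1)
                - np.sum(y_obs * ~exposed_obs * w0) / np.sum(~exposed_obs * w0))

    print(f"RCT lift:   {lift_rct: .4f}")   # ~0.002
    print(f"Naive lift: {lift_naive: .4f}") # biased upward (~2x here)
    print(f"IPW lift:   {lift_ipw: .4f}")   # ~0.002, given the true propensity

Because an analyst must in practice estimate the propensity from observed covariates, any confounder that remains unobserved (the paper's central concern) reintroduces the gap between the observational and experimental estimates.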

Suggested Citation

  • Brett R. Gordon & Florian Zettelmeyer & Neha Bhargava & Dan Chapsky, 2019. "A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook," Marketing Science, INFORMS, vol. 38(2), pages 193-225, March.
  • Handle: RePEc:inm:ormksc:v:38:y:2019:i:2:p:193-225
    DOI: 10.1287/mksc.2018.1135

    Download full text from publisher

    File URL: https://doi.org/10.1287/mksc.2018.1135
    Download Restriction: no

    File URL: https://libkey.io/10.1287/mksc.2018.1135?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    References listed on IDEAS

    1. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    2. Randall Lewis & David Reiley, 2014. "Online ads and offline sales: measuring the effect of retail advertising via a controlled experiment on Yahoo!," Quantitative Marketing and Economics (QME), Springer, vol. 12(3), pages 235-266, September.
    3. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    4. Joseph G. Altonji & Todd E. Elder & Christopher R. Taber, 2005. "Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools," Journal of Political Economy, University of Chicago Press, vol. 113(1), pages 151-184, February.
    5. Guido W. Imbens, 2015. "Matching Methods in Practice: Three Examples," Journal of Human Resources, University of Wisconsin Press, vol. 50(2), pages 373-419.
    6. Guido W. Imbens, 2003. "Sensitivity to Exogeneity Assumptions in Program Evaluation," American Economic Review, American Economic Association, vol. 93(2), pages 126-132, May.
    7. Navdeep S. Sahni, 2015. "Effect of temporal spacing between advertising exposures: Evidence from online field experiments," Quantitative Marketing and Economics (QME), Springer, vol. 13(3), pages 203-247, September.
    8. Avi Goldfarb & Catherine Tucker, 2011. "Online Display Advertising: Targeting and Obtrusiveness," Marketing Science, INFORMS, vol. 30(3), pages 389-404, 05-06.
    9. Michael J. Cooper & Huseyin Gulen & P. Raghavendra Rau, 2005. "Changing Names with Style: Mutual Fund Name Changes and Their Effects on Fund Flows," Journal of Finance, American Finance Association, vol. 60(6), pages 2825-2858, December.
    10. Navdeep Sahni, 2015. "Erratum to: Effect of temporal spacing between advertising exposures: Evidence from online field experiments," Quantitative Marketing and Economics (QME), Springer, vol. 13(3), pages 249-250, September.
    11. Wooldridge, Jeffrey M., 2007. "Inverse probability weighted estimation for general missing data problems," Journal of Econometrics, Elsevier, vol. 141(2), pages 1281-1301, December.
    12. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    13. Keisuke Hirano & Guido W. Imbens & Geert Ridder, 2003. "Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score," Econometrica, Econometric Society, vol. 71(4), pages 1161-1189, July.
    14. Alberto Abadie & Guido W. Imbens, 2008. "On the Failure of the Bootstrap for Matching Estimators," Econometrica, Econometric Society, vol. 76(6), pages 1537-1557, November.
    15. Thomas Blake & Chris Nosko & Steven Tadelis, 2015. "Consumer Heterogeneity and Paid Search Effectiveness: A Large‐Scale Field Experiment," Econometrica, Econometric Society, vol. 83, pages 155-174, January.
    16. Paul J. Ferraro & Juan José Miranda, 2017. "Panel Data Designs and Estimators as Substitutes for Randomized Controlled Trials in the Evaluation of Public Programs," Journal of the Association of Environmental and Resource Economists, University of Chicago Press, vol. 4(1), pages 281-317.
    17. Marco Caliendo & Sabine Kopeinig, 2008. "Some Practical Guidance For The Implementation Of Propensity Score Matching," Journal of Economic Surveys, Wiley Blackwell, vol. 22(1), pages 31-72, February.
    18. Rajeev H. Dehejia & Sadek Wahba, 2002. "Propensity Score-Matching Methods For Nonexperimental Causal Studies," The Review of Economics and Statistics, MIT Press, vol. 84(1), pages 151-161, February.
    19. Kirthi Kalyanam & John McAteer & Jonathan Marek & James Hodges & Lifeng Lin, 2018. "Cross channel effects of search engine advertising on brick & mortar retail sales: Meta analysis of large scale field experiments on Google.com," Quantitative Marketing and Economics (QME), Springer, vol. 16(1), pages 1-42, March.
    20. Kosuke Imai & Marc Ratkovic, 2014. "Covariate balancing propensity score," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 76(1), pages 243-263, January.
    21. A. Belloni & D. Chen & V. Chernozhukov & C. Hansen, 2012. "Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain," Econometrica, Econometric Society, vol. 80(6), pages 2369-2429, November.
    22. José R. Zubizarreta, 2015. "Stable Weights that Balance Covariates for Estimation With Incomplete Outcome Data," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(511), pages 910-922, September.
    23. Avi Goldfarb & Catherine Tucker, 2011. "Rejoinder--Implications of "Online Display Advertising: Targeting and Obtrusiveness"," Marketing Science, INFORMS, vol. 30(3), pages 413-415, 05-06.
    24. Imbens, Guido W & Angrist, Joshua D, 1994. "Identification and Estimation of Local Average Treatment Effects," Econometrica, Econometric Society, vol. 62(2), pages 467-475, March.
    25. Susan Athey & Guido W. Imbens & Stefan Wager, 2018. "Approximate residual balancing: debiased inference of average treatment effects in high dimensions," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 80(4), pages 597-623, September.
    26. Thai T. Pham & Yuanyuan Shen, 2017. "A Deep Causal Inference Approach to Measuring the Effects of Forming Group Loans in Online Non-profit Microfinance Platform," Papers 1706.02795, arXiv.org.
    27. Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
    28. Imbens, Guido W. & Rubin, Donald B., 2015. "Causal Inference for Statistics, Social, and Biomedical Sciences," Cambridge Books, Cambridge University Press, number 9780521885881.
    29. José R. Zubizarreta, 2012. "Using Mixed Integer Programming for Matching in an Observational Study of Kidney Failure After Surgery," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 107(500), pages 1360-1371, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Weijia Dai & Hyunjin Kim & Michael Luca, 2023. "Frontiers: Which Firms Gain from Digital Advertising? Evidence from a Field Experiment," Marketing Science, INFORMS, vol. 42(3), pages 429-439, May.
    2. Susan Athey & Guido W. Imbens, 2017. "The State of Applied Econometrics: Causality and Policy Evaluation," Journal of Economic Perspectives, American Economic Association, vol. 31(2), pages 3-32, Spring.
    3. Huber, Martin, 2019. "An introduction to flexible methods for policy evaluation," FSES Working Papers 504, Faculty of Economics and Social Sciences, University of Freiburg/Fribourg Switzerland.
    4. Jones A.M & Rice N, 2009. "Econometric Evaluation of Health Policies," Health, Econometrics and Data Group (HEDG) Working Papers 09/09, HEDG, c/o Department of Economics, University of York.
    5. Garrett A. Johnson & Randall A. Lewis & David H. Reiley, 2017. "When Less Is More: Data and Power in Advertising Experiments," Marketing Science, INFORMS, vol. 36(1), pages 43-53, January.
    6. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    7. Kirthi Kalyanam & John McAteer & Jonathan Marek & James Hodges & Lifeng Lin, 2018. "Cross channel effects of search engine advertising on brick & mortar retail sales: Meta analysis of large scale field experiments on Google.com," Quantitative Marketing and Economics (QME), Springer, vol. 16(1), pages 1-42, March.
    8. Guido W. Imbens, 2022. "Causality in Econometrics: Choice vs Chance," Econometrica, Econometric Society, vol. 90(6), pages 2541-2566, November.
    9. Jeffrey Smith & Arthur Sweetman, 2016. "Viewpoint: Estimating the causal effects of policies and programs," Canadian Journal of Economics, Canadian Economics Association, vol. 49(3), pages 871-905, August.
    10. Brett R. Gordon & Robert Moakler & Florian Zettelmeyer, 2023. "Predictive Incrementality by Experimentation (PIE) for Ad Measurement," Papers 2304.06828, arXiv.org.
    11. Ferman, Bruno, 2021. "Matching estimators with few treated and many control observations," Journal of Econometrics, Elsevier, vol. 225(2), pages 295-307.
    12. Huber, Martin & Lechner, Michael & Wunsch, Conny, 2013. "The performance of estimators based on the propensity score," Journal of Econometrics, Elsevier, vol. 175(1), pages 1-21.
    13. Farrell, Max H., 2015. "Robust inference on average treatment effects with possibly more covariates than observations," Journal of Econometrics, Elsevier, vol. 189(1), pages 1-23.
    14. Johannes Hermle & Giorgio Martini, 2022. "Valid and Unobtrusive Measurement of Returns to Advertising through Asymmetric Budget Split," Papers 2207.00206, arXiv.org.
    15. Pedro H. C. Sant'Anna & Xiaojun Song & Qi Xu, 2022. "Covariate distribution balance via propensity scores," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 37(6), pages 1093-1120, September.
    16. Tommaso Nannicini, 2007. "Simulation-based sensitivity analysis for matching estimators," Stata Journal, StataCorp LP, vol. 7(3), pages 334-350, September.
    17. Flores, Carlos A. & Mitnik, Oscar A., 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," IZA Discussion Papers 4451, Institute of Labor Economics (IZA).
    18. Caliendo, Marco & Mahlstedt, Robert & Mitnik, Oscar A., 2017. "Unobservable, but unimportant? The relevance of usually unobserved variables for the evaluation of labor market policies," Labour Economics, Elsevier, vol. 46(C), pages 14-25.
    19. Matthew A. Masten & Alexandre Poirier & Linqi Zhang, 2024. "Assessing Sensitivity to Unconfoundedness: Estimation and Inference," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 42(1), pages 1-13, January.
    20. Ganesh Karapakula, 2023. "Stable Probability Weighting: Large-Sample and Finite-Sample Estimation and Inference Methods for Heterogeneous Causal Effects of Multivalued Treatments Under Limited Overlap," Papers 2301.05703, arXiv.org, revised Jan 2023.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormksc:v:38:y:2019:i:2:p:193-225. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.