Printed from https://ideas.repec.org/p/upf/upfgen/712.html

Stepwise multiple testing as formalized data snooping

Author

Listed:
  • Joseph P. Romano
  • Michael Wolf

Abstract

It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
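The step-down idea described in the abstract — repeatedly compare the studentized test statistics against a bootstrap critical value taken over the hypotheses not yet rejected — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes iid data (the paper also covers time series via block resampling), uses a simple iid bootstrap, and the function name `stepm` and its signature are illustrative only.

```python
import numpy as np

def stepm(excess_returns, alpha=0.05, n_boot=1000, seed=0):
    """Step-down multiple testing in the spirit of the StepM procedure.

    Sketch only: iid bootstrap, one-sided tests of "strategy beats benchmark".
    excess_returns: (T, S) array of strategy-minus-benchmark returns.
    Returns the indices of strategies declared to beat the benchmark.
    """
    rng = np.random.default_rng(seed)
    T, S = excess_returns.shape
    means = excess_returns.mean(axis=0)
    ses = excess_returns.std(axis=0, ddof=1) / np.sqrt(T)
    t_stats = means / ses  # studentized statistics, one per strategy

    # Bootstrap distribution of the centered, studentized statistics.
    boot_t = np.empty((n_boot, S))
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)
        sample = excess_returns[idx]
        bm = sample.mean(axis=0)
        bs = sample.std(axis=0, ddof=1) / np.sqrt(T)
        boot_t[b] = (bm - means) / bs

    rejected = np.zeros(S, dtype=bool)
    while True:
        active = ~rejected
        if not active.any():
            break
        # Critical value: (1 - alpha) quantile of the maximum statistic
        # over the hypotheses not yet rejected. Taking the max over the
        # joint bootstrap draws is what captures the dependence structure.
        crit = np.quantile(boot_t[:, active].max(axis=1), 1 - alpha)
        newly_rejected = active & (t_stats > crit)
        if not newly_rejected.any():
            break  # no further rejections: stop
        rejected |= newly_rejected  # step down and re-test the rest
    return np.flatnonzero(rejected)
```

After each round of rejections, the maximum is taken over a smaller set, so the critical value can only shrink; this is why the step-down method rejects at least as many false hypotheses as the corresponding single-step method.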

Suggested Citation

  • Joseph P. Romano & Michael Wolf, 2003. "Stepwise multiple testing as formalized data snooping," Economics Working Papers 712, Department of Economics and Business, Universitat Pompeu Fabra.
  • Handle: RePEc:upf:upfgen:712

    Download full text from publisher

    File URL: https://econ-papers.upf.edu/papers/712.pdf
    File Function: Whole Paper
    Download Restriction: no

    References listed on IDEAS

    1. Wolf, Michael & Romano, Joseph P., 2001. "Improved nonparametric confidence intervals in time series regressions," DES - Working Papers. Statistics and Econometrics. WS ws010201, Universidad Carlos III de Madrid. Departamento de Estadística.
    2. Delgado, Miguel A. & Rodriguez-Poo, Juan M. & Wolf, Michael, 2001. "Subsampling inference in cube root asymptotics with an application to Manski's maximum score estimator," Economics Letters, Elsevier, vol. 73(2), pages 241-250, November.
    3. Leamer, Edward E, 1983. "Let's Take the Con Out of Econometrics," American Economic Review, American Economic Association, vol. 73(1), pages 31-43, March.
    4. Andrews, Donald W K, 1991. "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation," Econometrica, Econometric Society, vol. 59(3), pages 817-858, May.
    5. Lo, Andrew W & MacKinlay, A Craig, 1990. "Data-Snooping Biases in Tests of Financial Asset Pricing Models," Review of Financial Studies, Society for Financial Studies, vol. 3(3), pages 431-467.
    6. Lovell, Michael C, 1983. "Data Mining," The Review of Economics and Statistics, MIT Press, vol. 65(1), pages 1-12, February.
    7. Andrews, Donald W K & Monahan, J Christopher, 1992. "An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator," Econometrica, Econometric Society, vol. 60(4), pages 953-966, July.
    8. Gonzalo, Jesus & Wolf, Michael, 2005. "Subsampling inference in threshold autoregressive models," Journal of Econometrics, Elsevier, vol. 127(2), pages 201-224, August.
    9. Tae-Hwy Lee & Yong Bao & Burak Saltoglu, 2006. "Evaluating predictive performance of value-at-risk models in emerging markets: a reality check," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 25(2), pages 101-128.
    10. Halbert White, 2000. "A Reality Check for Data Snooping," Econometrica, Econometric Society, vol. 68(5), pages 1097-1126, September.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    2. Castle Jennifer L. & Doornik Jurgen A & Hendry David F., 2011. "Evaluating Automatic Model Selection," Journal of Time Series Econometrics, De Gruyter, vol. 3(1), pages 1-33, February.
    3. Aleksejs Krecetovs & Pasquale Della Corte, 2016. "Macro uncertainty and currency premia," 2016 Meeting Papers 624, Society for Economic Dynamics.
    4. Firmin Doko Tchatoka & Qazi Haque, 2020. "On bootstrapping tests of equal forecast accuracy for nested models," Economics Discussion / Working Papers 20-06, The University of Western Australia, Department of Economics.
    5. Jose Dias Curto & Jose Castro Pinto, 2009. "The coefficient of variation asymptotic distribution in the case of non-iid random variables," Journal of Applied Statistics, Taylor & Francis Journals, vol. 36(1), pages 21-32.
    6. Romano, Joseph P. & Shaikh, Azeem M. & Wolf, Michael, 2008. "Formalized Data Snooping Based On Generalized Error Rates," Econometric Theory, Cambridge University Press, vol. 24(2), pages 404-447, April.
    7. Ledoit, Oliver & Wolf, Michael, 2008. "Robust performance hypothesis testing with the Sharpe ratio," Journal of Empirical Finance, Elsevier, vol. 15(5), pages 850-859, December.
    8. Danilov, D.L. & Magnus, J.R., 2001. "On the Harm that Pretesting Does," Other publications TiSEM f131c709-4db4-468d-9ae8-9, Tilburg University, School of Economics and Management.
    9. Ardia, David & Boudt, Kris, 2018. "The peer performance ratios of hedge funds," Journal of Banking & Finance, Elsevier, vol. 87(C), pages 351-368.
    10. Todd E. Clark, 2004. "Can out-of-sample forecast comparisons help prevent overfitting?," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 23(2), pages 115-139.
    11. Olivier Ledoit & Michael Wolf, 2018. "Robust performance hypothesis testing with smooth functions of population moments," ECON - Working Papers 305, Department of Economics - University of Zurich.
    12. Cheol‐Ho Park & Scott H. Irwin, 2007. "What Do We Know About The Profitability Of Technical Analysis?," Journal of Economic Surveys, Wiley Blackwell, vol. 21(4), pages 786-826, September.
    13. Auer, Benjamin R. & Schuhmacher, Frank, 2013. "Performance hypothesis testing with the Sharpe ratio: The case of hedge funds," Finance Research Letters, Elsevier, vol. 10(4), pages 196-208.
    14. Hsu, Po-Hsuan & Taylor, Mark P. & Wang, Zigan, 2016. "Technical trading: Is it still beating the foreign exchange market?," Journal of International Economics, Elsevier, vol. 102(C), pages 188-208.
    15. Yang, Haisheng & He, Jie & Chen, Shaoling, 2015. "The fragility of the Environmental Kuznets Curve: Revisiting the hypothesis with Chinese data via an “Extreme Bound Analysis”," Ecological Economics, Elsevier, vol. 109(C), pages 41-58.
    16. Matteo Mogliani, 2010. "Residual-based tests for cointegration and multiple deterministic structural breaks: A Monte Carlo study," Working Papers halshs-00564897, HAL.
    17. Marcelo Fernandes & Breno Neri, 2010. "Nonparametric Entropy-Based Tests of Independence Between Stochastic Processes," Econometric Reviews, Taylor & Francis Journals, vol. 29(3), pages 276-306.
    18. Christina Ziegler, 2009. "Testing Predictive Ability of Business Cycle Indicators for the Euro Area," ifo Working Paper Series 69, ifo Institute - Leibniz Institute for Economic Research at the University of Munich.
    19. Bajgrowicz, Pierre & Scaillet, Olivier, 2012. "Technical trading revisited: False discoveries, persistence tests, and transaction costs," Journal of Financial Economics, Elsevier, vol. 106(3), pages 473-491.
    20. Terrence Hallahan & Robert Faff, 2001. "Induced persistence or reversals in fund performance?: the effect of survivorship bias," Applied Financial Economics, Taylor & Francis Journals, vol. 11(2), pages 119-126.

    More about this item

    Keywords

    Bootstrap; data snooping; familywise error; multiple testing; step-down method;

    JEL classification:

    • C12 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Hypothesis Testing: General
    • C14 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Semiparametric and Nonparametric Methods: General
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:upf:upfgen:712. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the provider. General contact details of provider: http://www.econ.upf.edu/ .

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.