
Stepwise Multiple Testing as Formalized Data Snooping

Authors

  • Joseph P. Romano
  • Michael Wolf

Abstract

It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure that asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it will often reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, ours implicitly captures the joint dependence structure of the test statistics, which results in an increased ability to detect alternative hypotheses. We prove that our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference on the individual regression coefficients in a multiple regression framework. Simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
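The step-down idea summarized in the abstract can be illustrated with a short sketch. The Python code below is not the authors' implementation: it uses non-studentized statistics and a plain i.i.d. bootstrap, whereas the paper advocates studentization and resampling schemes appropriate for time series; the function name, defaults, and data layout are illustrative assumptions.

```python
import numpy as np

def stepdown_max_test(diffs, alpha=0.05, n_boot=1000, seed=0):
    """Bootstrap step-down testing in the spirit of the abstract.

    diffs : (T, S) array of per-period performance differences of S
            candidate strategies versus a common benchmark.
    Returns the indices of strategies declared to beat the benchmark,
    aiming to control the familywise error rate at level alpha.
    """
    rng = np.random.default_rng(seed)
    T, S = diffs.shape
    means = diffs.mean(axis=0)
    stats = np.sqrt(T) * means                  # basic, non-studentized statistics

    # Resample once and recenter; the same draws are reused in every step,
    # so the joint dependence across strategies is preserved.
    boot = np.empty((n_boot, S))
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)        # i.i.d. resampling (illustrative only)
        boot[b] = np.sqrt(T) * (diffs[idx].mean(axis=0) - means)

    rejected = np.zeros(S, dtype=bool)
    while True:
        active = ~rejected
        if not active.any():
            break                               # everything already rejected
        # Critical value: (1 - alpha) quantile of the bootstrapped maximum
        # over the hypotheses that have not yet been rejected.
        crit = np.quantile(boot[:, active].max(axis=1), 1.0 - alpha)
        newly = active & (stats > crit)
        if not newly.any():
            break                               # no further rejections: stop
        rejected |= newly
    return np.flatnonzero(rejected)
```

The first pass coincides with a single-step test based on the maximum statistic; each later pass recomputes the critical value over a smaller active set and can only add rejections, which is the source of the power gain over single-step methods described in the abstract.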

Suggested Citation

  • Joseph P. Romano & Michael Wolf, 2003. "Stepwise Multiple Testing as Formalized Data Snooping," Working Papers 17, Barcelona School of Economics.
  • Handle: RePEc:bge:wpaper:17

    Download full text from publisher

    File URL: http://www.barcelonagse.eu/sites/default/files/working_paper_pdfs/17.pdf
    Download Restriction: no


    References listed on IDEAS

    1. Tae-Hwy Lee & Yong Bao & Burak Saltoglu, 2006. "Evaluating predictive performance of value-at-risk models in emerging markets: a reality check," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 25(2), pages 101-128.
    2. Romano, Joseph P. & Wolf, Michael, 2001. "Improved nonparametric confidence intervals in time series regressions," DES - Working Papers. Statistics and Econometrics. WS ws010201, Universidad Carlos III de Madrid. Departamento de Estadística.
    3. Delgado, Miguel A. & Rodriguez-Poo, Juan M. & Wolf, Michael, 2001. "Subsampling inference in cube root asymptotics with an application to Manski's maximum score estimator," Economics Letters, Elsevier, vol. 73(2), pages 241-250, November.
    4. Lo, Andrew W & MacKinlay, A Craig, 1990. "Data-Snooping Biases in Tests of Financial Asset Pricing Models," The Review of Financial Studies, Society for Financial Studies, vol. 3(3), pages 431-467.
    5. Leamer, Edward E, 1983. "Let's Take the Con Out of Econometrics," American Economic Review, American Economic Association, vol. 73(1), pages 31-43, March.
    6. Andrews, Donald W K, 1991. "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation," Econometrica, Econometric Society, vol. 59(3), pages 817-858, May.
    7. Andrews, Donald W K & Monahan, J Christopher, 1992. "An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator," Econometrica, Econometric Society, vol. 60(4), pages 953-966, July.
    8. Gonzalo, Jesus & Wolf, Michael, 2005. "Subsampling inference in threshold autoregressive models," Journal of Econometrics, Elsevier, vol. 127(2), pages 201-224, August.
    9. Lovell, Michael C, 1983. "Data Mining," The Review of Economics and Statistics, MIT Press, vol. 65(1), pages 1-12, February.
    10. Halbert White, 2000. "A Reality Check for Data Snooping," Econometrica, Econometric Society, vol. 68(5), pages 1097-1126, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    2. Castle, Jennifer L. & Doornik, Jurgen A. & Hendry, David F., 2011. "Evaluating Automatic Model Selection," Journal of Time Series Econometrics, De Gruyter, vol. 3(1), pages 1-33, February.
    3. Aleksejs Krecetovs & Pasquale Della Corte, 2016. "Macro uncertainty and currency premia," 2016 Meeting Papers 624, Society for Economic Dynamics.
    4. Marian Vavra, 2015. "On a Bootstrap Test for Forecast Evaluations," Working and Discussion Papers WP 5/2015, Research Department, National Bank of Slovakia.
    5. David R. Bell & Olivier Ledoit & Michael Wolf, 2012. "A new portfolio formation approach to mispricing of marketing performance indicators with an application to customer satisfaction," ECON - Working Papers 079, Department of Economics - University of Zurich, revised Dec 2013.
    6. Firmin Doko Tchatoka & Qazi Haque, 2023. "On bootstrapping tests of equal forecast accuracy for nested models," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 42(7), pages 1844-1864, November.
    7. Jose Dias Curto & Jose Castro Pinto, 2009. "The coefficient of variation asymptotic distribution in the case of non-iid random variables," Journal of Applied Statistics, Taylor & Francis Journals, vol. 36(1), pages 21-32.
    8. Romano, Joseph P. & Shaikh, Azeem M. & Wolf, Michael, 2008. "Formalized Data Snooping Based On Generalized Error Rates," Econometric Theory, Cambridge University Press, vol. 24(2), pages 404-447, April.
    9. Ledoit, Olivier & Wolf, Michael, 2008. "Robust performance hypothesis testing with the Sharpe ratio," Journal of Empirical Finance, Elsevier, vol. 15(5), pages 850-859, December.
    10. Gabriel Frahm & Tobias Wickern & Christof Wiechers, 2012. "Multiple tests for the performance of different investment strategies," AStA Advances in Statistical Analysis, Springer;German Statistical Society, vol. 96(3), pages 343-383, July.
    11. Danilov, D.L. & Magnus, J.R., 2001. "On the Harm that Pretesting Does," Other publications TiSEM f131c709-4db4-468d-9ae8-9, Tilburg University, School of Economics and Management.
    12. Ardia, David & Boudt, Kris, 2018. "The peer performance ratios of hedge funds," Journal of Banking & Finance, Elsevier, vol. 87(C), pages 351-368.
    13. Todd E. Clark, 2004. "Can out-of-sample forecast comparisons help prevent overfitting?," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 23(2), pages 115-139.
    14. Olivier Ledoit & Michael Wolf, 2018. "Robust performance hypothesis testing with smooth functions of population moments," ECON - Working Papers 305, Department of Economics - University of Zurich.
    15. Cheol‐Ho Park & Scott H. Irwin, 2007. "What Do We Know About The Profitability Of Technical Analysis?," Journal of Economic Surveys, Wiley Blackwell, vol. 21(4), pages 786-826, September.
    16. Auer, Benjamin R. & Schuhmacher, Frank, 2013. "Performance hypothesis testing with the Sharpe ratio: The case of hedge funds," Finance Research Letters, Elsevier, vol. 10(4), pages 196-208.
    17. Hsu, Po-Hsuan & Taylor, Mark P. & Wang, Zigan, 2016. "Technical trading: Is it still beating the foreign exchange market?," Journal of International Economics, Elsevier, vol. 102(C), pages 188-208.
    18. Yang, Haisheng & He, Jie & Chen, Shaoling, 2015. "The fragility of the Environmental Kuznets Curve: Revisiting the hypothesis with Chinese data via an “Extreme Bound Analysis”," Ecological Economics, Elsevier, vol. 109(C), pages 41-58.
    19. Matteo Mogliani, 2010. "Residual-based tests for cointegration and multiple deterministic structural breaks: A Monte Carlo study," Working Papers halshs-00564897, HAL.
    20. Marcelo Fernandes & Breno Neri, 2010. "Nonparametric Entropy-Based Tests of Independence Between Stochastic Processes," Econometric Reviews, Taylor & Francis Journals, vol. 29(3), pages 276-306.

    More about this item

    Keywords

    Bootstrap; data snooping; familywise error; multiple testing; step-down method;

    JEL classification:

    • C12 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Hypothesis Testing: General
    • C14 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Semiparametric and Nonparametric Methods: General
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:bge:wpaper:17. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Bruno Guallar (email available below). General contact details of provider: https://edirc.repec.org/data/bargses.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.