
Selecting the number of components in principal component analysis using cross-validation approximations

Author

Listed:
  • Josse, Julie
  • Husson, François

Abstract

Cross-validation is a tried and tested approach to selecting the number of components in principal component analysis (PCA); however, its main drawback is its computational cost. In a regression (or nonparametric regression) setting, criteria such as generalized cross-validation (GCV) provide convenient approximations to leave-one-out cross-validation. They are based on the relation between the prediction error and the residual sum of squares weighted by elements of a projection matrix (or a smoothing matrix). Such a relation is then established for PCA using an original presentation of PCA based on a unique projection matrix. It enables the definition of two cross-validation approximation criteria: the smoothing approximation of the cross-validation criterion (SACV) and the GCV criterion. The method is assessed with simulations and gives promising results.
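
To make the idea concrete, the sketch below implements a GCV-style criterion for choosing the number of PCA components in the spirit of the abstract: for each candidate rank k, the residual sum of squares of the rank-k reconstruction is penalized by the number of fitted parameters, so no explicit leave-one-out loop is needed. This is a minimal illustrative sketch, not the authors' implementation; in particular, the parameter count df_k = p + k*(n + p - k) (column means plus scores and loadings of a rank-k fit) is an assumption made for illustration and may differ from the degrees of freedom derived in the paper.

    import numpy as np

    def gcv_scores(X, max_comp):
        """GCV-style score for each candidate number of PCA components.

        Minimal sketch: rank-k reconstruction via SVD of the centered
        matrix, penalized by an assumed parameter count
        df_k = p + k*(n + p - k). Not the exact criterion of
        Josse & Husson (2012); for illustration only.
        """
        X = np.asarray(X, dtype=float)
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = {}
        for k in range(1, max_comp + 1):
            # Best rank-k approximation of the centered data.
            Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
            rss = np.sum((Xc - Xk) ** 2)
            df = p + k * (n + p - k)      # assumed parameter count
            denom = n * p - df
            if denom <= 0:                # model too complex for GCV
                break
            scores[k] = n * p * rss / denom ** 2
        return scores

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Simulated data: rank-3 signal plus noise.
        signal = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))
        X = signal + 0.5 * rng.normal(size=(100, 20))
        scores = gcv_scores(X, max_comp=10)
        best = min(scores, key=scores.get)
        print("GCV-selected number of components:", best)

On simulated low-rank data such as this, the criterion is typically minimized near the signal rank, while explicit leave-one-out cross-validation would require refitting the PCA for every left-out cell, which is the computational cost the approximation is meant to avoid.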

Suggested Citation

  • Josse, Julie & Husson, François, 2012. "Selecting the number of components in principal component analysis using cross-validation approximations," Computational Statistics & Data Analysis, Elsevier, vol. 56(6), pages 1869-1879.
  • Handle: RePEc:eee:csdana:v:56:y:2012:i:6:p:1869-1879 DOI: 10.1016/j.csda.2011.11.012

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167947311004099
    Download Restriction: Full text for ScienceDirect subscribers only.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Chunlei Ke & Yuedong Wang, 2004. "Smoothing Spline Nonlinear Nonparametric Regression Models," Journal of the American Statistical Association, American Statistical Association, vol. 99, pages 1166-1175, December.
    2. Ferre, Louis, 1995. "Selection of components in principal component analysis: A comparison of methods," Computational Statistics & Data Analysis, Elsevier, vol. 19(6), pages 669-682, June.
    3. Peres-Neto, Pedro R. & Jackson, Donald A. & Somers, Keith M., 2005. "How many principal components? stopping rules for determining the number of non-trivial axes revisited," Computational Statistics & Data Analysis, Elsevier, vol. 49(4), pages 974-997, June.
    4. Li, Baibing & Martin, Elaine B. & Morris, A. Julian, 2002. "On principal component analysis in L1," Computational Statistics & Data Analysis, Elsevier, vol. 40(3), pages 471-474, September.
    5. Julie Josse & Jérôme Pagès & François Husson, 2011. "Multiple imputation in principal component analysis," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 5(3), pages 231-246, October.
    6. Dray, Stephane, 2008. "On the number of principal components: A test of dimensionality based on measurements of similarity between matrices," Computational Statistics & Data Analysis, Elsevier, vol. 52(4), pages 2228-2237, January.
    7. Henk Kiers, 1997. "Weighted least squares fitting using ordinary least squares algorithms," Psychometrika, Springer;The Psychometric Society, vol. 62(2), pages 251-266, June.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Fornaro, Paolo, 2016. "Predicting Finnish economic activity using firm-level data," International Journal of Forecasting, Elsevier, vol. 32(1), pages 10-19.
    2. repec:eee:insuma:v:75:y:2017:i:c:p:151-165 is not listed on IDEAS
    3. Bada, Oualid & Kneip, Alois, 2014. "Parameter cascading for panel models with unknown number of unobserved factors: An application to the credit spread puzzle," Computational Statistics & Data Analysis, Elsevier, vol. 76(C), pages 95-115.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:56:y:2012:i:6:p:1869-1879. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Dana Niculescu. General contact details of provider: http://www.elsevier.com/locate/csda.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis. RePEc uses bibliographic data supplied by the respective publishers.