
High dimensional covariance matrix estimation by penalizing the matrix-logarithm transformed likelihood

Author

Listed:
  • Yu, Philip L.H.
  • Wang, Xiaohang
  • Zhu, Yuanyuan

Abstract

It is well known that when the dimension of the data becomes very large, the sample covariance matrix S is no longer a good estimator of the population covariance matrix Σ. One typical consequence of using such an estimator is that the eigenvalues estimated from S are distorted. Many existing methods attempt to address this problem, for example by regularizing Σ through thresholding or banding. In this paper, we estimate Σ by maximizing the likelihood with a new penalty on the matrix logarithm of Σ (denoted by A) of the form ‖A − mI‖²_F = ∑_i (log(d_i) − m)², where d_i is the ith eigenvalue of Σ. This penalty shrinks the estimated eigenvalues of A toward the mean eigenvalue m. The merits of our method are that it guarantees the estimate of Σ to be non-negative definite and that it is computationally efficient. A simulation study and applications to portfolio optimization and classification of genomic data show that the proposed method outperforms existing methods.
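To make the penalty concrete, the following is a minimal Python sketch, not the authors' penalized-likelihood estimator, that only illustrates the shrinkage effect the penalty ‖A − mI‖²_F is designed to produce: the log-eigenvalues of the sample covariance are pulled toward their mean m and then exponentiated back. The function name log_eig_shrinkage and the tuning parameter lam are illustrative assumptions, not part of the paper.

```python
import numpy as np

def log_eig_shrinkage(X, lam=1.0):
    """Illustrative shrinkage of the log-eigenvalues of the sample covariance.

    This is only a sketch of the effect of the penalty ||A - mI||_F^2,
    where A = log(Sigma) and m is the mean log-eigenvalue; the paper's
    estimator maximizes the full penalized likelihood instead.
    """
    S = np.cov(X, rowvar=False)            # p x p sample covariance of n x p data
    d, V = np.linalg.eigh(S)               # eigenvalues d and eigenvectors V of S
    d = np.clip(d, 1e-12, None)            # guard against zero/negative round-off
    log_d = np.log(d)                      # eigenvalues of A = log(S)
    m = log_d.mean()                       # mean eigenvalue of A
    # Shrink each log-eigenvalue toward m; lam -> infinity gives a scaled identity.
    log_d_shrunk = (log_d + lam * m) / (1.0 + lam)
    # Map back via the matrix exponential, which keeps the estimate positive definite.
    return V @ np.diag(np.exp(log_d_shrunk)) @ V.T
```

Because the estimate is recovered by exponentiating the shrunk log-eigenvalues, all of its eigenvalues are strictly positive, which mirrors the paper's point that the estimator is guaranteed to be non-negative definite.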

Suggested Citation

  • Yu, Philip L.H. & Wang, Xiaohang & Zhu, Yuanyuan, 2017. "High dimensional covariance matrix estimation by penalizing the matrix-logarithm transformed likelihood," Computational Statistics & Data Analysis, Elsevier, vol. 114(C), pages 12-25.
  • Handle: RePEc:eee:csdana:v:114:y:2017:i:c:p:12-25
    DOI: 10.1016/j.csda.2017.04.004

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167947317300774
    Download Restriction: Full text for ScienceDirect subscribers only.

    File URL: https://libkey.io/10.1016/j.csda.2017.04.004?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ledoit, Olivier & Wolf, Michael, 2004. "A well-conditioned estimator for large-dimensional covariance matrices," Journal of Multivariate Analysis, Elsevier, vol. 88(2), pages 365-411, February.
    2. Harry Markowitz, 1952. "Portfolio Selection," Journal of Finance, American Finance Association, vol. 7(1), pages 77-91, March.
    3. Adam J. Rothman, 2012. "Positive definite estimators of large covariance matrices," Biometrika, Biometrika Trust, vol. 99(3), pages 733-740.
    4. Jonsson, Dag, 1982. "Some limit theorems for the eigenvalues of a sample covariance matrix," Journal of Multivariate Analysis, Elsevier, vol. 12(1), pages 1-38, March.
    5. Ming Yuan & Yi Lin, 2007. "Model selection and estimation in the Gaussian graphical model," Biometrika, Biometrika Trust, vol. 94(1), pages 19-35.
    6. Lingzhou Xue & Shiqian Ma & Hui Zou, 2012. "Positive-Definite ℓ 1 -Penalized Estimation of Large Covariance Matrices," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 107(500), pages 1480-1491, December.
    7. Furrer, Reinhard & Bengtsson, Thomas, 2007. "Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants," Journal of Multivariate Analysis, Elsevier, vol. 98(2), pages 227-255, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Javier Ibáñez & Jorge Sastre & Pedro Ruiz & José M. Alonso & Emilio Defez, 2021. "An Improved Taylor Algorithm for Computing the Matrix Logarithm," Mathematics, MDPI, vol. 9(17), pages 1-19, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Avagyan, Vahe & Alonso Fernández, Andrés Modesto & Nogales, Francisco J., 2015. "D-trace Precision Matrix Estimation Using Adaptive Lasso Penalties," DES - Working Papers. Statistics and Econometrics. WS 21775, Universidad Carlos III de Madrid. Departamento de Estadística.
    2. Yan Zhang & Jiyuan Tao & Zhixiang Yin & Guoqiang Wang, 2022. "Improved Large Covariance Matrix Estimation Based on Efficient Convex Combination and Its Application in Portfolio Optimization," Mathematics, MDPI, vol. 10(22), pages 1-15, November.
    3. Arnab Chakrabarti & Rituparna Sen, 2018. "Some Statistical Problems with High Dimensional Financial data," Papers 1808.02953, arXiv.org.
    4. Bailey, Natalia & Pesaran, M. Hashem & Smith, L. Vanessa, 2019. "A multiple testing approach to the regularisation of large sample correlation matrices," Journal of Econometrics, Elsevier, vol. 208(2), pages 507-534.
    5. Vahe Avagyan & Andrés M. Alonso & Francisco J. Nogales, 2018. "D-trace estimation of a precision matrix using adaptive Lasso penalties," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 12(2), pages 425-447, June.
    6. Lam, Clifford, 2020. "High-dimensional covariance matrix estimation," LSE Research Online Documents on Economics 101667, London School of Economics and Political Science, LSE Library.
    7. Gautam Sabnis & Debdeep Pati & Anirban Bhattacharya, 2019. "Compressed Covariance Estimation with Automated Dimension Learning," Sankhya A: The Indian Journal of Statistics, Springer;Indian Statistical Institute, vol. 81(2), pages 466-481, December.
    8. Xiaoping Zhou & Dmitry Malioutov & Frank J. Fabozzi & Svetlozar T. Rachev, 2014. "Smooth monotone covariance for elliptical distributions and applications in finance," Quantitative Finance, Taylor & Francis Journals, vol. 14(9), pages 1555-1571, September.
    9. Joo, Young C. & Park, Sung Y., 2021. "Optimal portfolio selection using a simple double-shrinkage selection rule," Finance Research Letters, Elsevier, vol. 43(C).
    10. Azam Kheyri & Andriette Bekker & Mohammad Arashi, 2022. "High-Dimensional Precision Matrix Estimation through GSOS with Application in the Foreign Exchange Market," Mathematics, MDPI, vol. 10(22), pages 1-19, November.
    11. Ding, Wenliang & Shu, Lianjie & Gu, Xinhua, 2023. "A robust Glasso approach to portfolio selection in high dimensions," Journal of Empirical Finance, Elsevier, vol. 70(C), pages 22-37.
    12. Ziqi Chen & Chenlei Leng, 2016. "Dynamic Covariance Models," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(515), pages 1196-1207, July.
    13. Guo, Wenxing & Balakrishnan, Narayanaswamy & He, Mu, 2023. "Envelope-based sparse reduced-rank regression for multivariate linear model," Journal of Multivariate Analysis, Elsevier, vol. 195(C).
    14. Choi, Young-Geun & Lim, Johan & Roy, Anindya & Park, Junyong, 2019. "Fixed support positive-definite modification of covariance matrix estimators via linear shrinkage," Journal of Multivariate Analysis, Elsevier, vol. 171(C), pages 234-249.
    15. Ding, Yi & Li, Yingying & Zheng, Xinghua, 2021. "High dimensional minimum variance portfolio estimation under statistical factor models," Journal of Econometrics, Elsevier, vol. 222(1), pages 502-515.
    16. Li, Peili & Xiao, Yunhai, 2018. "An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation," Computational Statistics & Data Analysis, Elsevier, vol. 128(C), pages 292-307.
    17. Lam, Clifford, 2008. "Estimation of large precision matrices through block penalization," LSE Research Online Documents on Economics 31543, London School of Economics and Political Science, LSE Library.
    18. Füss, Roland & Miebs, Felix & Trübenbach, Fabian, 2014. "A jackknife-type estimator for portfolio revision," Journal of Banking & Finance, Elsevier, vol. 43(C), pages 14-28.
    19. Lassance, Nathan & Vrins, Frédéric, 2021. "Portfolio selection with parsimonious higher comoments estimation," Journal of Banking & Finance, Elsevier, vol. 126(C).
    20. Kashlak, Adam B., 2021. "Non-asymptotic error controlled sparse high dimensional precision matrix estimation," Journal of Multivariate Analysis, Elsevier, vol. 181(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:114:y:2017:i:c:p:12-25. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.