Printed from https://ideas.repec.org/a/eee/apmaco/v357y2019icp119-138.html

Double fused Lasso penalized LAD for matrix regression

Author

Listed:
  • Li, Mei
  • Kong, Lingchen

Abstract

Increasingly complex data, with a response observed on vector and matrix predictors, arise in statistics and machine learning. Zhou and Li (2014) proposed matrix regression based on the least squares (LS) method, focusing mainly on regularized matrix regression with a nuclear norm penalty under noise with mean 0 and fixed covariance. In practice, however, the noise may be heavy-tailed or its distribution may be unknown. In such cases, it is well known that the least absolute deviation (LAD) method outperforms the LS method. Taking the structure of the predictors into account, we propose the double fused Lasso penalized LAD for matrix regression in this paper. The new penalty term combines the fused Lasso and a matrix-type fused Lasso. We establish a strong duality theorem between the double fused Lasso penalized LAD problem and its dual. Based on this, we design a highly scalable symmetric Gauss–Seidel based Alternating Direction Method of Multipliers (sGS-ADMM) algorithm to solve the dual problem. Moreover, we prove global convergence and a Q-linear rate of convergence. Finally, the effectiveness of our method is demonstrated by numerical experiments on simulated and real datasets.
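The objective described in the abstract can be sketched as a LAD loss on a matrix-linear model plus two fused-Lasso-style penalties. The code below is an illustrative sketch only: the model form y_i = ⟨X_i, B⟩ + ε_i, the function name, and the exact combination of ℓ1 and row/column difference penalties are assumptions, since the paper's precise formulation and penalty weights are not given on this page.

```python
import numpy as np

def lad_double_fused_lasso_objective(B, X_list, y, lam1, lam2):
    """Illustrative (hypothetical) objective: LAD loss plus l1 and
    matrix-type fused-Lasso penalties on the coefficient matrix B.
    The paper's exact penalty may differ; this is only a sketch."""
    # LAD loss: sum of absolute residuals under y_i ~ <X_i, B>,
    # where <X_i, B> is the trace inner product (elementwise sum).
    loss = sum(abs(yi - np.sum(Xi * B)) for Xi, yi in zip(X_list, y))
    # l1 sparsity term on the entries of B
    penalty = lam1 * np.sum(np.abs(B))
    # Matrix-type fused Lasso: absolute differences between adjacent
    # entries along rows (axis=0) and columns (axis=1) of B.
    penalty += lam2 * (np.sum(np.abs(np.diff(B, axis=0)))
                       + np.sum(np.abs(np.diff(B, axis=1))))
    return loss + penalty
```

The nonsmooth terms are what motivate the dual sGS-ADMM approach described in the abstract, since the objective is convex but not differentiable.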

Suggested Citation

  • Li, Mei & Kong, Lingchen, 2019. "Double fused Lasso penalized LAD for matrix regression," Applied Mathematics and Computation, Elsevier, vol. 357(C), pages 119-138.
  • Handle: RePEc:eee:apmaco:v:357:y:2019:i:c:p:119-138
    DOI: 10.1016/j.amc.2019.03.051

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0096300319302590
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.amc.2019.03.051?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Deren Han & Defeng Sun & Liwei Zhang, 2018. "Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Programming," Mathematics of Operations Research, INFORMS, vol. 43(2), pages 622-637, May.
    2. Hua Zhou & Lexin Li, 2014. "Regularized matrix regression," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 76(2), pages 463-483, March.
    3. Wang, Lie, 2013. "The L1 penalized LAD estimator for high dimensional linear regression," Journal of Multivariate Analysis, Elsevier, vol. 120(C), pages 135-151.
    4. Robert Tibshirani & Michael Saunders & Saharon Rosset & Ji Zhu & Keith Knight, 2005. "Sparsity and smoothness via the fused lasso," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 67(1), pages 91-108, February.
    5. Wang, Hansheng & Li, Guodong & Jiang, Guohua, 2007. "Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso," Journal of Business & Economic Statistics, American Statistical Association, vol. 25, pages 347-355, July.
    6. Ming Yuan & Ali Ekici & Zhaosong Lu & Renato Monteiro, 2007. "Dimension reduction and coefficient estimation in multivariate linear regression," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 69(3), pages 329-346, June.
    7. Hui Zou & Trevor Hastie, 2005. "Addendum: Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 67(5), pages 768-768, November.
    8. Hui Zou & Trevor Hastie, 2005. "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 67(2), pages 301-320, April.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Chen, Huangyue & Kong, Lingchen & Shang, Pan & Pan, Shanshan, 2020. "Safe feature screening rules for the regularized Huber regression," Applied Mathematics and Computation, Elsevier, vol. 386(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xiaofei Wu & Rongmei Liang & Hu Yang, 2022. "Penalized and constrained LAD estimation in fixed and high dimension," Statistical Papers, Springer, vol. 63(1), pages 53-95, February.
    2. Umberto Amato & Anestis Antoniadis & Italia De Feis & Irene Gijbels, 2021. "Penalised robust estimators for sparse and high-dimensional linear models," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 30(1), pages 1-48, March.
    3. Florian Ziel, 2015. "Iteratively reweighted adaptive lasso for conditional heteroscedastic time series with applications to AR-ARCH type processes," Papers 1502.06557, arXiv.org, revised Dec 2015.
    4. Ziel, Florian, 2016. "Iteratively reweighted adaptive lasso for conditional heteroscedastic time series with applications to AR–ARCH type processes," Computational Statistics & Data Analysis, Elsevier, vol. 100(C), pages 773-793.
    5. Diego Vidaurre & Concha Bielza & Pedro Larrañaga, 2013. "A Survey of L1 Regression," International Statistical Review, International Statistical Institute, vol. 81(3), pages 361-387, December.
    6. Tutz, Gerhard & Pößnecker, Wolfgang & Uhlmann, Lorenz, 2015. "Variable selection in general multinomial logit models," Computational Statistics & Data Analysis, Elsevier, vol. 82(C), pages 207-222.
    7. Mkhadri, Abdallah & Ouhourane, Mohamed, 2013. "An extended variable inclusion and shrinkage algorithm for correlated variables," Computational Statistics & Data Analysis, Elsevier, vol. 57(1), pages 631-644.
    8. Huang, Lele & Zhao, Junlong & Wang, Huiwen & Wang, Siyang, 2016. "Robust shrinkage estimation and selection for functional multiple linear model through LAD loss," Computational Statistics & Data Analysis, Elsevier, vol. 103(C), pages 384-400.
    9. Tomáš Plíhal, 2021. "Scheduled macroeconomic news announcements and Forex volatility forecasting," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 40(8), pages 1379-1397, December.
    10. Loann David Denis Desboulets, 2018. "A Review on Variable Selection in Regression Analysis," Econometrics, MDPI, vol. 6(4), pages 1-27, November.
    11. Wentao Qu & Xianchao Xiu & Huangyue Chen & Lingchen Kong, 2023. "A Survey on High-Dimensional Subspace Clustering," Mathematics, MDPI, vol. 11(2), pages 1-39, January.
    12. Victor Chernozhukov & Christian Hansen & Yuan Liao, 2015. "A lava attack on the recovery of sums of dense and sparse signals," CeMMAP working papers CWP56/15, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    13. Takumi Saegusa & Tianzhou Ma & Gang Li & Ying Qing Chen & Mei-Ling Ting Lee, 2020. "Variable Selection in Threshold Regression Model with Applications to HIV Drug Adherence Data," Statistics in Biosciences, Springer;International Chinese Statistical Association, vol. 12(3), pages 376-398, December.
    14. Lee Kyu Ha & Chakraborty Sounak & Sun Jianguo, 2011. "Bayesian Variable Selection in Semiparametric Proportional Hazards Model for High Dimensional Survival Data," The International Journal of Biostatistics, De Gruyter, vol. 7(1), pages 1-32, April.
    15. Sophie Lambert-Lacroix & Laurent Zwald, 2016. "The adaptive BerHu penalty in robust regression," Journal of Nonparametric Statistics, Taylor & Francis Journals, vol. 28(3), pages 487-514, September.
    16. Pei Wang & Shunjie Chen & Sijia Yang, 2022. "Recent Advances on Penalized Regression Models for Biological Data," Mathematics, MDPI, vol. 10(19), pages 1-24, October.
    17. T. Cai & J. Huang & L. Tian, 2009. "Regularized Estimation for the Accelerated Failure Time Model," Biometrics, The International Biometric Society, vol. 65(2), pages 394-404, June.
    18. Nott, David J., 2008. "Predictive performance of Dirichlet process shrinkage methods in linear regression," Computational Statistics & Data Analysis, Elsevier, vol. 52(7), pages 3658-3669, March.
    19. Korobilis, Dimitris, 2013. "Hierarchical shrinkage priors for dynamic regressions with many predictors," International Journal of Forecasting, Elsevier, vol. 29(1), pages 43-59.
    20. Kenneth Lange & Eric C. Chi & Hua Zhou, 2014. "A Brief Survey of Modern Optimization for Statisticians," International Statistical Review, International Statistical Institute, vol. 82(1), pages 46-70, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:apmaco:v:357:y:2019:i:c:p:119-138. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/applied-mathematics-and-computation .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.