
Subsampling spectral clustering for stochastic block models in large-scale networks

Author

Listed:
  • Deng, Jiayi
  • Huang, Danyang
  • Ding, Yi
  • Zhu, Yingqiu
  • Jing, Bingyi
  • Zhang, Bo

Abstract

The rapid development of science and technology has generated large amounts of network data, leading to significant computational challenges for network community detection. To address this issue, a novel subsampling spectral clustering algorithm is proposed to identify community structures in large-scale networks with limited computing resources. The algorithm constructs a subnetwork by simple random subsampling from the entire network, and then extends existing spectral clustering to the subnetwork to estimate the community labels of all nodes in the network. As a result, for large-scale datasets, the method can be run even on a personal computer. Moreover, the proposed method can be parallelized. The theoretical properties of the subsampling spectral clustering method are established under the stochastic block model and its extension, the degree-corrected stochastic block model. Finally, a number of simulation studies and two real data analyses are conducted to illustrate and evaluate the proposed method.
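
To make the subsample-then-extend workflow concrete, below is a minimal illustrative sketch in Python (NumPy and scikit-learn): draw a simple random subsample of nodes, run spectral clustering on the induced subnetwork, and then extend the labels to the remaining nodes. The extension rule shown here (assigning each node to the community whose sampled members it links to most densely) is one plausible choice and not necessarily the authors' exact estimator; the function name and parameters are hypothetical.

    import numpy as np
    from sklearn.cluster import KMeans

    def subsample_spectral_clustering(A, K, m, seed=None):
        """Hypothetical sketch. A: (n, n) symmetric 0/1 adjacency matrix,
        K: number of communities, m: subsample size.
        Returns estimated community labels for all n nodes."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]

        # Step 1: simple random subsample of m nodes and the induced subnetwork.
        idx = rng.choice(n, size=m, replace=False)
        A_sub = A[np.ix_(idx, idx)]

        # Step 2: spectral clustering on the subnetwork -- k-means on the
        # K largest-magnitude eigenvectors of the subsampled adjacency matrix.
        vals, vecs = np.linalg.eigh(A_sub)
        U = vecs[:, np.argsort(-np.abs(vals))[:K]]
        sub_labels = KMeans(n_clusters=K, n_init=10).fit_predict(U)

        # Step 3 (one plausible extension rule): assign every node in the full
        # network to the community whose sampled members it connects to most often.
        # Assumes each community receives at least one sampled node.
        rates = np.column_stack(
            [A[:, idx[sub_labels == g]].mean(axis=1) for g in range(K)]
        )
        labels = rates.argmax(axis=1)
        labels[idx] = sub_labels  # keep the subsample's own spectral labels
        return labels

For truly large sparse networks one would replace the dense eigendecomposition with a sparse solver (e.g., scipy.sparse.linalg.eigsh); the point of the sketch is that only the m-by-m subnetwork is ever decomposed, and the extension step over the remaining nodes can be split across workers, which is what makes a parallel implementation straightforward.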

Suggested Citation

  • Deng, Jiayi & Huang, Danyang & Ding, Yi & Zhu, Yingqiu & Jing, Bingyi & Zhang, Bo, 2024. "Subsampling spectral clustering for stochastic block models in large-scale networks," Computational Statistics & Data Analysis, Elsevier, vol. 189(C).
  • Handle: RePEc:eee:csdana:v:189:y:2024:i:c:s0167947323001469
    DOI: 10.1016/j.csda.2023.107835

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167947323001469
    Download Restriction: Full text for ScienceDirect subscribers only.

    File URL: https://libkey.io/10.1016/j.csda.2023.107835?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Jianwei Hu & Hong Qin & Ting Yan & Yunpeng Zhao, 2020. "Corrected Bayesian Information Criterion for Stochastic Block Models," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 115(532), pages 1771-1783, December.
    2. Jun Yu & HaiYing Wang & Mingyao Ai & Huiming Zhang, 2022. "Optimal Distributed Subsampling for Maximum Quasi-Likelihood Estimators With Massive Data," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 117(537), pages 265-276, January.
    3. Tianxi Li & Elizaveta Levina & Ji Zhu, 2020. "Network cross-validation by edge sampling," Biometrika, Biometrika Trust, vol. 107(2), pages 257-276.
    4. N. Binkiewicz & J. T. Vogelstein & K. Rohe, 2017. "Covariate-assisted spectral clustering," Biometrika, Biometrika Trust, vol. 104(2), pages 361-377.
    5. Haiying Wang & Yanyuan Ma, 2021. "Optimal subsampling for quantile regression in big data," Biometrika, Biometrika Trust, vol. 108(1), pages 99-112.
    6. HaiYing Wang & Min Yang & John Stufken, 2019. "Information-Based Optimal Subdata Selection for Big Data Linear Regression," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 114(525), pages 393-405, January.
    7. HaiYing Wang & Rong Zhu & Ping Ma, 2018. "Optimal Subsampling for Large Sample Logistic Regression," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 113(522), pages 829-844, April.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jun Yu & Jiaqi Liu & HaiYing Wang, 2023. "Information-based optimal subdata selection for non-linear models," Statistical Papers, Springer, vol. 64(4), pages 1069-1093, August.
    2. Duarte, Belmiro P.M. & Atkinson, Anthony C. & Oliveira, Nuno M.C., 2024. "Using hierarchical information-theoretic criteria to optimize subsampling of extensive datasets," LSE Research Online Documents on Economics 121641, London School of Economics and Political Science, LSE Library.
    3. Tianzhen Wang & Haixiang Zhang, 2022. "Optimal subsampling for multiplicative regression with massive data," Statistica Neerlandica, Netherlands Society for Statistics and Operations Research, vol. 76(4), pages 418-449, November.
    4. Ziyang Wang & HaiYing Wang & Nalini Ravishanker, 2023. "Subsampling in Longitudinal Models," Methodology and Computing in Applied Probability, Springer, vol. 25(1), pages 1-29, March.
    5. Xiaohui Yuan & Yong Li & Xiaogang Dong & Tianqing Liu, 2022. "Optimal subsampling for composite quantile regression in big data," Statistical Papers, Springer, vol. 63(5), pages 1649-1676, October.
    6. Feifei Wang & Danyang Huang & Tianchen Gao & Shuyuan Wu & Hansheng Wang, 2022. "Sequential one‐step estimator by sub‐sampling for customer churn analysis with massive data sets," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 71(5), pages 1753-1786, November.
    7. Su, Miaomiao & Wang, Ruoyu & Wang, Qihua, 2022. "A two-stage optimal subsampling estimation for missing data problems with large-scale data," Computational Statistics & Data Analysis, Elsevier, vol. 173(C).
    8. Jun Yu & HaiYing Wang, 2022. "Subdata selection algorithm for linear model discrimination," Statistical Papers, Springer, vol. 63(6), pages 1883-1906, December.
    9. J. Lars Kirkby & Dang H. Nguyen & Duy Nguyen & Nhu N. Nguyen, 2022. "Inversion-free subsampling Newton’s method for large sample logistic regression," Statistical Papers, Springer, vol. 63(3), pages 943-963, June.
    10. Li Guo & Wolfgang Karl Hardle & Yubo Tao, 2018. "A Time-Varying Network for Cryptocurrencies," Papers 1802.03708, arXiv.org, revised Nov 2022.
    11. Amalan Mahendran & Helen Thompson & James M. McGree, 2023. "A model robust subsampling approach for Generalised Linear Models in big data settings," Statistical Papers, Springer, vol. 64(4), pages 1137-1157, August.
    12. Zhang, Haixiang & Wang, HaiYing, 2021. "Distributed subdata selection for big data via sampling-based approach," Computational Statistics & Data Analysis, Elsevier, vol. 153(C).
    13. Hector, Emily C. & Luo, Lan & Song, Peter X.-K., 2023. "Parallel-and-stream accelerator for computationally fast supervised learning," Computational Statistics & Data Analysis, Elsevier, vol. 177(C).
    14. Sokbae Lee & Serena Ng, 2020. "An Econometric Perspective on Algorithmic Subsampling," Annual Review of Economics, Annual Reviews, vol. 12(1), pages 45-80, August.
    15. Yujing Shao & Lei Wang, 2022. "Optimal subsampling for composite quantile regression model in massive data," Statistical Papers, Springer, vol. 63(4), pages 1139-1161, August.
    16. Lee, JooChul & Wang, HaiYing & Schifano, Elizabeth D., 2020. "Online updating method to correct for measurement error in big data streams," Computational Statistics & Data Analysis, Elsevier, vol. 149(C).
    17. Seong‐H. Lee & Yanyuan Ma & Ying Wei & Jinbo Chen, 2023. "Optimal sampling for positive only electronic health record data," Biometrics, The International Biometric Society, vol. 79(4), pages 2974-2986, December.
    18. Lulu Zuo & Haixiang Zhang & HaiYing Wang & Liuquan Sun, 2021. "Optimal subsample selection for massive logistic regression with distributed data," Computational Statistics, Springer, vol. 36(4), pages 2535-2562, December.
    19. Yuan, Quan & Liu, Binghui, 2021. "Community detection via an efficient nonconvex optimization approach based on modularity," Computational Statistics & Data Analysis, Elsevier, vol. 157(C).
    20. Heather Mathews & Alexander Volfovsky, 2023. "Community informed experimental design," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 32(4), pages 1141-1166, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:189:y:2024:i:c:s0167947323001469. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.