
Learning mixture models via component-wise parameter smoothing

Author

Listed:
  • Reddy, Chandan K.
  • Rajaratnam, Bala

Abstract

The task of obtaining an optimal set of parameters to fit a mixture model has many applications in science and engineering and is computationally challenging. A novel algorithm using a convolution-based smoothing approach to construct a hierarchy (or family) of smoothed log-likelihood surfaces is proposed. This approach smooths the likelihood function and applies the EM algorithm to obtain a promising solution on the smoothed surface. Using the most promising solutions as initial guesses, the EM algorithm is applied again on the original likelihood. Though the results are demonstrated using only two levels, the method can potentially be applied to any number of levels in the hierarchy. A theoretical insight demonstrates that the smoothing approach indeed reduces the overall gradient of a modified version of the likelihood surface. This optimization procedure effectively eliminates extensive searching in non-promising regions of the parameter space. Results on benchmark datasets demonstrate significant improvements of the proposed algorithm over other approaches. Empirical results on the reduction in the number of local maxima and improvements in the initialization procedures are provided.
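
A minimal sketch of the two-level idea described in the abstract, assuming Python with NumPy and scikit-learn (neither is used or mentioned in the article): run EM on a smoothed surrogate of the likelihood first, then restart EM on the original likelihood from the promising level-one solution. The smoothing step below, jittering the data with Gaussian noise of bandwidth h, is only an illustrative stand-in for the paper's convolution-based, component-wise parameter smoothing; the function name two_level_em and all parameters are hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def two_level_em(X, n_components, h=0.5, n_restarts=5, seed=0):
        """Two-level EM: fit on a smoothed surrogate, then refine on the original data."""
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_restarts):
            # Level 1: EM on a smoothed surrogate problem. Adding N(0, h^2 I) noise
            # broadens the fitted components and flattens the likelihood surface;
            # this is a stand-in for the paper's smoothing operator, not the operator itself.
            X_smooth = X + rng.normal(scale=h, size=X.shape)
            gm1 = GaussianMixture(n_components=n_components,
                                  random_state=int(rng.integers(1 << 31))).fit(X_smooth)
            # Level 2: EM on the original likelihood, initialized at the
            # promising level-1 solution.
            gm2 = GaussianMixture(n_components=n_components,
                                  weights_init=gm1.weights_,
                                  means_init=gm1.means_,
                                  precisions_init=gm1.precisions_).fit(X)
            # Keep the restart with the best converged log-likelihood bound.
            if best is None or gm2.lower_bound_ > best.lower_bound_:
                best = gm2
        return best

    # Usage sketch on synthetic data:
    # X = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 4.0])
    # gm = two_level_em(X, n_components=2, h=0.8)
    # print(gm.means_)

Only two levels are used here, mirroring the experiments the abstract reports; a fuller hierarchy would repeat level one at several decreasing bandwidths before the final EM run on the original likelihood.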

Suggested Citation

  • Reddy, Chandan K. & Rajaratnam, Bala, 2010. "Learning mixture models via component-wise parameter smoothing," Computational Statistics & Data Analysis, Elsevier, vol. 54(3), pages 732-749, March.
  • Handle: RePEc:eee:csdana:v:54:y:2010:i:3:p:732-749

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167-9473(09)00162-5
    Download Restriction: Full text for ScienceDirect subscribers only.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Hunt, Lynette & Jorgensen, Murray, 2003. "Mixture model clustering for mixed data with missing information," Computational Statistics & Data Analysis, Elsevier, vol. 41(3-4), pages 429-440, January.
    2. Biernacki, Christophe & Celeux, Gilles & Govaert, Gérard, 2003. "Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models," Computational Statistics & Data Analysis, Elsevier, vol. 41(3-4), pages 561-575, January.
    3. Böhning, Dankmar & Seidel, Wilfried & Alfò, Marco & Garel, Bernard & Patilea, Valentin & Walther, Gunther, 2007. "Advances in Mixture Models," Computational Statistics & Data Analysis, Elsevier, vol. 51(11), pages 5205-5210, July.
    4. R. H. Shumway & D. S. Stoffer, 1982. "An Approach to Time Series Smoothing and Forecasting Using the EM Algorithm," Journal of Time Series Analysis, Wiley Blackwell, vol. 3(4), pages 253-264, July.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Franko, Mitja & Nagode, Marko, 2015. "Probability density function of the equivalent stress amplitude using statistical transformation," Reliability Engineering and System Safety, Elsevier, vol. 134(C), pages 118-125.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. O’Hagan, Adrian & Murphy, Thomas Brendan & Gormley, Isobel Claire, 2012. "Computational aspects of fitting mixture models via the expectation–maximization algorithm," Computational Statistics & Data Analysis, Elsevier, vol. 56(12), pages 3843-3864.
    2. Bohning, Dankmar & Seidel, Wilfried, 2003. "Editorial: recent developments in mixture models," Computational Statistics & Data Analysis, Elsevier, vol. 41(3-4), pages 349-357, January.
    3. Allassonnière, Stéphanie & Chevallier, Juliette, 2021. "A new class of stochastic EM algorithms. Escaping local maxima and handling intractable sampling," Computational Statistics & Data Analysis, Elsevier, vol. 159(C).
    4. Riccardo Rastelli & Michael Fop, 2020. "A stochastic block model for interaction lengths," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 14(2), pages 485-512, June.
    5. Poncela, Pilar & Ruiz Ortega, Esther, 2012. "More is not always better: back to the Kalman filter in dynamic factor models," DES - Working Papers. Statistics and Econometrics. WS ws122317, Universidad Carlos III de Madrid. Departamento de Estadística.
    6. Mazzocchi, Mario, 2006. "Time patterns in UK demand for alcohol and tobacco: an application of the EM algorithm," Computational Statistics & Data Analysis, Elsevier, vol. 50(9), pages 2191-2205, May.
    7. Prajapati, Deepak & Shafiei, Sobhan & Kundu, Debasis & Jamalizadeh, Ahad, 2025. "Geometric scale mixtures of normal distributions," Journal of Multivariate Analysis, Elsevier, vol. 208(C).
    8. repec:plo:pcbi00:1003824 is not listed on IDEAS
    9. Bańbura, Marta & Giannone, Domenico & Lenza, Michele, 2015. "Conditional forecasts and scenario analysis with vector autoregressions for large cross-sections," International Journal of Forecasting, Elsevier, vol. 31(3), pages 739-756.
    10. Adrian O’Hagan & Arthur White, 2019. "Improved model-based clustering performance using Bayesian initialization averaging," Computational Statistics, Springer, vol. 34(1), pages 201-231, March.
    11. Maria Iannario, 2012. "Preliminary estimators for a mixture model of ordinal data," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 6(3), pages 163-184, October.
    12. Dumas, Bernard & Harvey, Campbell R. & Ruiz, Pierre, 2003. "Are correlations of stock returns justified by subsequent changes in national outputs?," Journal of International Money and Finance, Elsevier, vol. 22(6), pages 777-811, November.
    13. Matteo Barigozzi & Marc Hallin, 2026. "The Dynamic, the Static, and the Weak: Factor Models and the Analysis of High‐Dimensional Time Series," Journal of Time Series Analysis, Wiley Blackwell, vol. 47(1), pages 201-219, January.
    14. Romain Houssa & Lasse Bork & Hans Dewachter, 2008. "Identification of Macroeconomic Factors in Large Panels," Working Papers 1010, University of Namur, Department of Economics.
    15. Zirogiannis, Nikolaos & Tripodis, Yorghos, "undated". "A Generalized Dynamic Factor Model for Panel Data: Estimation with a Two-Cycle Conditional Expectation-Maximization Algorithm," Working Paper Series 142752, University of Massachusetts, Amherst, Department of Resource Economics.
    16. Tobias Hartl & Roland Jucknewitz, 2022. "Approximate state space modelling of unobserved fractional components," Econometric Reviews, Taylor & Francis Journals, vol. 41(1), pages 75-98, January.
    17. Amanda F. Mejia, 2022. "Discussion on “distributional independent component analysis for diverse neuroimaging modalities” by Ben Wu, Subhadip Pal, Jian Kang, and Ying Guo," Biometrics, The International Biometric Society, vol. 78(3), pages 1109-1112, September.
    18. Scott Brave & R. Andrew Butters, 2014. "Nowcasting Using the Chicago Fed National Activity Index," Economic Perspectives, Federal Reserve Bank of Chicago, issue Q I, pages 19-37.
    19. Zhu, Xuwen & Melnykov, Volodymyr, 2018. "Manly transformation in finite mixture modeling," Computational Statistics & Data Analysis, Elsevier, vol. 121(C), pages 190-208.
    20. Lebret, Rémi & Iovleff, Serge & Langrognet, Florent & Biernacki, Christophe & Celeux, Gilles & Govaert, Gérard, 2015. "Rmixmod: The R Package of the Model-Based Unsupervised, Supervised, and Semi-Supervised Classification Mixmod Library," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 67(i06).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:csdana:v:54:y:2010:i:3:p:732-749. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/csda.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.