Parallel computing techniques
Abstract: Parallel computing means dividing a job into several tasks and using more than one processor simultaneously to perform them. Assume you have developed a new estimation method for the parameters of a complicated statistical model. After you prove the asymptotic properties of the method (for instance, the asymptotic distribution of the estimator), you wish to perform many simulations to assess its performance for realistic sample sizes and for different parameter values. You must generate simulated data, for example, 100,000 times for each sample size and parameter value. The total simulation work requires a huge number of random number generations and takes a long time on your PC. If you use 100 PCs in your institute to run these simulations simultaneously, you may expect the total execution time to drop to roughly 1/100. This is the simple idea of parallel computing.
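The simulation study described in the abstract is "embarrassingly parallel": each replication is independent, so the replications can simply be split across processors. A minimal sketch of this idea, using Python's standard `multiprocessing` module (the function names, the toy sample-mean "estimator", and all parameter values are illustrative assumptions, not from the paper):

```python
import multiprocessing as mp
import random
import statistics

def simulate_one(args):
    """One Monte Carlo replication: draw `n` values from a normal
    distribution with the given mean and return the sample-mean
    estimate (a stand-in for a real, more expensive estimator)."""
    seed, n, true_mean = args
    rng = random.Random(seed)  # per-replication seed for reproducibility
    sample = [rng.gauss(true_mean, 1.0) for _ in range(n)]
    return statistics.fmean(sample)

def parallel_simulation(n_replications, n, true_mean, workers=4):
    """Distribute the independent replications across worker processes."""
    tasks = [(seed, n, true_mean) for seed in range(n_replications)]
    with mp.Pool(workers) as pool:
        return pool.map(simulate_one, tasks)

if __name__ == "__main__":
    estimates = parallel_simulation(n_replications=200, n=500, true_mean=2.0)
    print(statistics.fmean(estimates))
```

With 100 machines instead of 4 local worker processes, the same split would be expressed with a message-passing library such as MPI (see the Swann 2002 reference below), but the structure of the computation is unchanged.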
Bibliographic Info: Paper provided by Humboldt-Universität Berlin, Center for Applied Statistics and Economics (CASE) in its series Papers with number 2004,27.
Date of creation: 2004
References:
- Christofides, A. & Tanyi, B. & Christofides, S. & Whobrey, D. & Christofides, N., 1999. "The optimal discretization of probability density functions," Computational Statistics & Data Analysis, Elsevier, vol. 31(4), pages 475-486, October.
- Bull, J. M. & Riley, G. D. & Rasbash, J. & Goldstein, H., 1999. "Parallel implementation of a multilevel modelling package," Computational Statistics & Data Analysis, Elsevier, vol. 31(4), pages 457-474, October.
- Jones, H. & Mitra, G. & Parkinson, D. & Spinks, T., 1999. "A parallel implementation of the maximum likelihood method in positron emission tomography image reconstruction," Computational Statistics & Data Analysis, Elsevier, vol. 31(4), pages 417-439, October.
- Murphy, K. & Clint, M. & Perrott, R. H., 1999. "Re-engineering statistical software for efficient parallel execution," Computational Statistics & Data Analysis, Elsevier, vol. 31(4), pages 441-456, October.
- Swann, Christopher A., 2002. "Maximum Likelihood Estimation Using Parallel Computing: An Introduction to MPI," Computational Economics, Society for Computational Economics, vol. 19(2), pages 145-178, April.
- Racine, Jeff, 2002. "Parallel distributed kernel estimation," Computational Statistics & Data Analysis, Elsevier, vol. 40(2), pages 293-302, August.