A Monte Carlo study on two methods of calculating the MLEs covariance matrix in a seemingly unrelated nonlinear regression
Econometric techniques for estimating output supply systems, factor demand systems, and consumer demand systems often require estimating a nonlinear system of equations with an additive error structure in reduced form. The covariance matrix of the maximum likelihood (ML) estimates of such a system can be calculated in two ways: by inverting the Hessian of the concentrated log-likelihood function, or by inverting the matrix obtained by pre- and post-multiplying the inverted ML estimate of the disturbance covariance matrix by the Jacobian of the reduced-form model. Malinvaud has shown that the latter method yields the covariance matrix of the actual limiting distribution, while Barnett has shown that the former is only an approximation. In this paper, we use a Monte Carlo simulation study to determine how these two covariance matrices differ with respect to the nonlinearity of the model, the number of observations in the data set, and the residual process. We find that the covariance matrix calculated from the Hessian of the concentrated likelihood function produces Wald statistics that are distributed above those calculated with the other covariance matrix. This difference becomes insignificant as the sample size increases to one hundred or more observations, suggesting that the asymptotic behavior of the two covariance matrices is reached quickly.
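The two calculations being compared can be illustrated in a minimal, hypothetical single-equation nonlinear regression (the paper itself studies a multi-equation SUR system; the model y = exp(beta*x) + e, the sample size, and the grid-search estimator below are all assumptions made for illustration, not the authors' design):

```python
import numpy as np

# Hypothetical single-equation example: y_i = exp(beta * x_i) + e_i.
# This scalar sketch only illustrates the two covariance calculations
# compared in the abstract, not the paper's full SUR setting.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 1.0, n)
beta_true = 1.5
y = np.exp(beta_true * x) + rng.normal(0.0, 0.1, n)

def ssr(b):
    r = y - np.exp(b * x)
    return r @ r

# Concentrated negative log-likelihood: sigma^2 is profiled out,
# leaving (n/2) * log(SSR(b)/n) up to an additive constant.
def neg_conc_loglik(b):
    return 0.5 * n * np.log(ssr(b) / n)

# Crude grid search for the MLE of beta (adequate for one parameter).
grid = np.linspace(0.5, 2.5, 4001)
b_hat = grid[np.argmin([neg_conc_loglik(b) for b in grid])]

# Method 1: invert a finite-difference Hessian of the concentrated
# log-likelihood at the estimate.
h = 1e-4
d2 = (neg_conc_loglik(b_hat + h) - 2.0 * neg_conc_loglik(b_hat)
      + neg_conc_loglik(b_hat - h)) / h**2
var_hessian = 1.0 / d2

# Method 2: the Jacobian-based formula sigma^2 * (J'J)^{-1}, the scalar
# analogue of inverting J' * inv(Sigma) * J in the multi-equation case.
J = x * np.exp(b_hat * x)                # d f(x, beta) / d beta
sigma2_hat = ssr(b_hat) / n
var_jacobian = sigma2_hat / (J @ J)

print(var_hessian, var_jacobian)
```

With a sample of this size the two variance estimates are close, consistent with the abstract's finding that the difference becomes negligible once the sample reaches roughly one hundred observations; at much smaller n the second-derivative terms that the Jacobian formula omits make the two diverge.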
Date of creation: 1995
Provider: Ludwigstraße 33, D-80539 Munich, Germany
Web page: https://mpra.ub.uni-muenchen.de