Analytic Derivatives for Estimation of Linear Dynamic Models
This paper develops two algorithms. Algorithm 1 computes the exact Gaussian log-likelihood function, its exact gradient vector, and an asymptotic approximation of its Hessian matrix for discrete-time linear dynamic models in state-space form. Algorithm 2, derived from Algorithm 1, computes the exact sample information matrix of this likelihood function. The computed quantities are analytic (not numerical approximations) and should therefore be useful for reliably, quickly, and accurately: (i) checking local identifiability of parameters by checking the rank of the information matrix; (ii) computing maximum likelihood estimates of parameters with Newton methods, using the gradient vector and Hessian matrix; and (iii) computing asymptotic covariances (Cramér-Rao bounds) of the parameter estimates with the Hessian or the information matrix. The principal contribution of the paper is Algorithm 2, which extends to multivariate models the univariate results of Porat and Friedlander (1986). Because they rely on the Kalman filter rather than the Levinson-Durbin filter used by Porat and Friedlander, Algorithms 1 and 2 automatically handle any pattern of missing or linearly aggregated data. Although Algorithm 1 is well known, it is treated in detail to make the paper self-contained.
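The core of Algorithm 1, evaluating the exact Gaussian log-likelihood of a state-space model via the Kalman filter's prediction-error decomposition, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model `x[t+1] = F x[t] + w[t]`, `y[t] = H x[t] + v[t]` with noise covariances `Q` and `R`, and all function and variable names, are assumptions chosen here for exposition.

```python
import numpy as np

def kalman_loglik(y, F, H, Q, R, x0, P0):
    """Exact Gaussian log-likelihood of observations y (T x p) via the
    prediction-error decomposition of the Kalman filter.

    Model (illustrative): x[t+1] = F x[t] + w[t],  w ~ N(0, Q)
                          y[t]   = H x[t] + v[t],  v ~ N(0, R)
    with predicted initial state x0 ~ N(x0, P0).
    """
    T, p = y.shape
    x, P = x0.copy(), P0.copy()
    loglik = 0.0
    for t in range(T):
        # Innovation (one-step prediction error) and its covariance.
        e = y[t] - H @ x
        S = H @ P @ H.T + R
        # Gaussian log-density of the innovation.
        loglik += -0.5 * (p * np.log(2 * np.pi)
                          + np.log(np.linalg.det(S))
                          + e @ np.linalg.solve(S, e))
        # Measurement update.
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ e
        P = P - K @ H @ P
        # Time update.
        x = F @ x
        P = F @ P @ F.T + Q
    return loglik
```

Because the filter never requires the observation vector to have a fixed dimension across time, rows of `y` (and the corresponding `H`, `R`) can be adjusted per period, which is how the Kalman filter accommodates missing or linearly aggregated data.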
Date of creation: Nov 1988
Contact details of provider:
Postal: 4600 Silver Hill Road, Washington, DC 20233
Phone: (301) 763-6460
Fax: (301) 763-5935
Web page: http://www.census.gov/ces
Handle: RePEc:cen:wpaper:88-5