Printed from https://ideas.repec.org/p/ifs/cemmap/13-18.html

On the iterated estimation of dynamic discrete choice games

Author

Listed:
  • Federico A. Bugni

    (Institute for Fiscal Studies and Duke University)

  • Jackson Bunting

    (Institute for Fiscal Studies)

Abstract

We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature. By considering a "maximum likelihood" criterion function, our estimator becomes the K-ML estimator in Aguirregabiria and Mira (2002, 2007). By considering a "minimum distance" criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007). First, we establish that the K-ML estimator is consistent and asymptotically normal for any K. This complements findings in Aguirregabiria and Mira (2007), who focus on K = 1 and on K large enough to induce convergence of the estimator. Furthermore, we show that the asymptotic variance of the K-ML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-ML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This new result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-ML estimators. Our main result implies two new and important corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators for all K. In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is asymptotically at least as efficient as any K-ML estimator for all K.
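The K-stage policy iteration idea described in the abstract can be sketched in a toy setting. The snippet below is a minimal, illustrative implementation of a K-ML-style (nested-pseudo-likelihood) iteration in a single-agent dynamic logit model, not the paper's dynamic-game setup: every primitive (payoffs, transitions, the scalar parameter `theta`, and the grid search) is an assumption chosen for transparency.

```python
# Toy sketch of K-stage policy-iteration ("K-ML"/NPL-style) estimation in a
# minimal single-agent dynamic logit model. Every primitive below (payoffs,
# transitions, the scalar parameter theta, the grid search) is an assumed
# illustration, not the paper's dynamic-game setup.
import numpy as np

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant (logit shock mean)
BETA = 0.9                   # discount factor (assumed)

def flow_payoff(theta):
    """u[s, a]: payoff theta*s - 0.5 for a=1, 0 for a=0 (assumed form)."""
    u = np.zeros((2, 2))
    u[:, 1] = theta * np.arange(2) - 0.5
    return u

def policy_value(P, theta):
    """Ex-ante value V_P(s) implied by choice probabilities P = Pr(a=1|s).
    Toy transition: action a moves the state to a, so the policy matrix
    doubles as the transition matrix over next states."""
    probs = np.stack([1 - P, P], axis=1)                       # probs[s, a]
    c = (probs * (flow_payoff(theta) + GAMMA - np.log(probs))).sum(axis=1)
    return np.linalg.solve(np.eye(2) - BETA * probs, c)

def best_response(P, theta):
    """Psi(P; theta): one policy-iteration step (logit best response)."""
    v = flow_payoff(theta) + BETA * policy_value(P, theta)     # v[s, a]
    return 1.0 / (1.0 + np.exp(v[:, 0] - v[:, 1]))             # Pr(a=1|s)

# --- simulate a panel from the model's own fixed point ------------------
rng = np.random.default_rng(0)
theta_true = 1.0
P_star = np.full(2, 0.5)
for _ in range(200):                       # iterate Psi to its fixed point
    P_star = best_response(P_star, theta_true)

n, s = 3000, 0
states = np.empty(n, dtype=int)
actions = np.empty(n, dtype=int)
for t in range(n):
    a = int(rng.random() < P_star[s])
    states[t], actions[t] = s, a
    s = a                                  # toy transition: next state = a

# --- K-stage PI estimator ----------------------------------------------
def k_ml(states, actions, K, grid=np.linspace(-1.0, 3.0, 401)):
    """Start from frequency estimates of P, then alternate a pseudo-ML
    step over theta (grid search, for transparency) with one
    policy-iteration update of P."""
    P = np.clip(np.array([actions[states == j].mean() for j in range(2)]),
                1e-3, 1 - 1e-3)
    theta_hat = None
    for _ in range(K):
        best = -np.inf
        for theta in grid:
            Q = best_response(P, theta)    # Psi(P; theta)
            ll = np.log(np.where(actions == 1, Q[states], 1 - Q[states])).sum()
            if ll > best:
                best, theta_hat = ll, theta
        P = best_response(P, theta_hat)    # update P for the next stage
    return theta_hat, P

theta_1, _ = k_ml(states, actions, K=1)    # 1-ML (one policy iteration)
theta_2, P_2 = k_ml(states, actions, K=2)  # 2-ML
```

The grid search stands in for the pseudo-likelihood maximization; the paper's asymptotic results concern how the distribution of such estimators varies (K-ML) or stays invariant under optimal weighting (K-MD) as K grows.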

Suggested Citation

  • Federico A. Bugni & Jackson Bunting, 2018. "On the iterated estimation of dynamic discrete choice games," CeMMAP working papers CWP13/18, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
  • Handle: RePEc:ifs:cemmap:13/18

    Download full text from publisher

    File URL: https://www.ifs.org.uk/uploads/CWP131818.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Victor Aguirregabiria & Pedro Mira, 2002. "Swapping the Nested Fixed Point Algorithm: A Class of Estimators for Discrete Markov Decision Models," Econometrica, Econometric Society, vol. 70(4), pages 1519-1543, July.

    More about this item

    Keywords

    dynamic discrete choice problems; dynamic games; pseudo maximum likelihood estimator; minimum distance estimator; estimation; asymptotic efficiency

    JEL classification:

    • C13 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Estimation: General
    • C61 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Optimization Techniques; Programming Models; Dynamic Analysis
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games

