Information Theoretic Limits on Learning Stochastic Differential Equations
Abstract
Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high-dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as the time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is that of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures.
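For intuition, the linear case described in the abstract can be sketched numerically: simulate a sample path of a linear SDE dX_t = A X_t dt + dW_t with a stable interaction matrix A, then recover A by least squares on the discretized increments. This is a minimal illustration, not the paper's method; all parameter values (dimension, observation time, step size) are arbitrary choices for the sketch.

```python
import numpy as np

# Illustrative sketch: learn the interaction matrix A of the linear SDE
#   dX_t = A X_t dt + dW_t
# from a single discretized sample path, via least squares on the increments.
# All parameters below are hypothetical choices for demonstration only.

rng = np.random.default_rng(0)

p = 5            # dimension of the system
T = 200.0        # total observation time
dt = 0.01        # Euler-Maruyama step size
n = int(T / dt)  # number of discretization steps

# Random interaction matrix, shifted so its eigenvalues have negative
# real parts (ensuring the process is stable / mean-reverting).
A = rng.normal(scale=0.3, size=(p, p)) - 1.5 * np.eye(p)

# Euler-Maruyama simulation of one sample path.
X = np.zeros((n + 1, p))
for k in range(n):
    dW = rng.normal(scale=np.sqrt(dt), size=p)
    X[k + 1] = X[k] + A @ X[k] * dt + dW

# Least-squares estimate: the increments satisfy dX[k] ~ dt * A X[k] + noise,
# so regress dX on X * dt. The solution of lstsq has shape (p, p) and equals
# A^T, hence the final transpose.
dX = np.diff(X, axis=0)   # shape (n, p)
Xk = X[:-1]               # states at the left endpoints, shape (n, p)
A_hat = np.linalg.lstsq(Xk * dt, dX, rcond=None)[0].T

print("max abs entrywise error:", np.abs(A_hat - A).max())
```

As the abstract's lower bound suggests, the estimation error shrinks as the observation time T grows; rerunning the sketch with a smaller T makes the entrywise error visibly larger.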
Bibliographic Info
Paper provided by arXiv.org in its series Papers with number 1103.1689.
Date of creation: Mar 2011
Date of revision:
Contact details of provider:
Web page: http://arxiv.org/
This paper has been announced in the following NEP Reports:
- NEP-ALL-2011-03-19 (All new papers)
- NEP-ECM-2011-03-19 (Econometrics)
- NEP-ORE-2011-03-19 (Operations Research)