Weak Dynamic Programming for Generalized State Constraints
Abstract: We provide a dynamic programming principle for stochastic optimal control problems with expectation constraints. A weak formulation, using test functions and a probabilistic relaxation of the constraint, avoids restrictions related to a measurable selection but still implies the Hamilton-Jacobi-Bellman equation in the viscosity sense. We treat open state constraints as a special case of expectation constraints and prove a comparison theorem to obtain the equation for closed state constraints.
Bibliographic Info: Paper provided by arXiv.org in its series Papers with number 1105.0745.
Date of creation: May 2011
Date of revision: Oct 2012
Publication status: Published in SIAM Journal on Control and Optimization, Vol. 50, No. 6, pp. 3344-3373, 2012
Contact details of provider:
Web page: http://arxiv.org/
Citations (recorded by the CitEc Project):
- Bruno Bouchard & Ludovic Moreau & Marcel Nutz, 2012. "Stochastic Target Games with Controlled Loss," Papers 1206.6325, arXiv.org, revised May 2013.
- Bruno Bouchard & Ludovic Moreau & Mete H. Soner, 2013. "Hedging under an expected loss constraint with small transaction costs," Papers 1309.4916, arXiv.org.
- Gordan Zitkovic, 2013. "Dynamic Programming for controlled Markov families: abstractly and over Martingale Measures," Papers 1307.5163, arXiv.org.