Weak Dynamic Programming for Generalized State Constraints
We provide a dynamic programming principle for stochastic optimal control problems with expectation constraints. A weak formulation, using test functions and a probabilistic relaxation of the constraint, avoids restrictions related to a measurable selection but still implies the Hamilton-Jacobi-Bellman equation in the viscosity sense. We treat open state constraints as a special case of expectation constraints and prove a comparison theorem to obtain the equation for closed state constraints.
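To make the setting concrete, the following is a generic sketch of the class of problems described in the abstract; the symbols ($f$, $g$, $b$, $\sigma$, $U$) are illustrative and not taken from the paper itself. A controlled diffusion $X^{t,x,\nu}$ is steered to maximize a terminal reward subject to an expectation constraint,

```latex
\[
V(t,x) \;=\; \sup\Big\{\, \mathbb{E}\big[f\big(X_T^{t,x,\nu}\big)\big]
  \;:\; \nu \text{ admissible},\;
  \mathbb{E}\big[g\big(X_T^{t,x,\nu}\big)\big] \ge 0 \,\Big\},
\]
and the dynamic programming principle leads to a Hamilton-Jacobi-Bellman
equation, understood in the viscosity sense:
\[
-\,\partial_t V(t,x)
\;-\; \sup_{u \in U}\Big( b(x,u)\cdot DV(t,x)
  \;+\; \tfrac{1}{2}\,\mathrm{Tr}\big[\sigma\sigma^{\!\top}(x,u)\,D^2 V(t,x)\big] \Big)
\;=\; 0 .
\]
```

An open state constraint $X_s^{t,x,\nu} \in \mathcal{O}$ can be encoded as an expectation constraint by taking $g$ to penalize exit from $\mathcal{O}$, which is the sense in which state constraints arise as a special case.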
Date of creation: May 2011
Date of revision: Oct 2012
Publication status: Published in SIAM Journal on Control and Optimization, Vol. 50, No. 6, pp. 3344-3373, 2012
Handle: RePEc:arx:papers:1105.0745