Some Applications of Linear Programming Formulations in Stochastic Control

Article ID: iaor20127258
Volume: 155
Issue: 2
Start Page Number: 572
End Page Number: 593
Publication Date: Nov 2012
Journal: Journal of Optimization Theory and Applications
Authors:
Keywords: control, programming: linear
Abstract:

We present two applications of linearization techniques in stochastic optimal control. In the first part, we show how the assumption of stability under concatenation for control processes can be dropped in the study of asymptotic stability domains. Generalizing Zubov’s method, the stability domain is then characterized as a level set of a semicontinuous generalized viscosity solution of the associated Hamilton–Jacobi–Bellman equation. In the second part, we extend our study to unbounded coefficients and apply the method to obtain a linear formulation for control problems whenever the state equation is a stochastic variational inequality.
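
For context, the linear programming ("linearization") formulation referred to in the abstract typically takes the following generic occupation-measure form, and Zubov's method rests on a level-set characterization of the stability domain. The LaTeX sketch below is a minimal illustration under assumed notation (b, sigma, L^u, f, delta, mu, Theta, g, w, D); none of these symbols is taken from the paper itself.

% Minimal LaTeX sketch (compiles with pdflatex + amsmath/amssymb).
% All notation below (b, sigma, L^u, f, delta, mu, Theta, g, w, D) is
% assumed for illustration and is NOT taken from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% (i) Controlled diffusion and its infinitesimal generator.
\[
  dX_t = b(X_t,u_t)\,dt + \sigma(X_t,u_t)\,dW_t, \qquad
  L^{u}\phi = \langle b(\cdot,u),\nabla\phi\rangle
            + \tfrac12\operatorname{Tr}\bigl(\sigma\sigma^{\top}(\cdot,u)\,D^{2}\phi\bigr).
\]

% (ii) Discounted value function and its linear relaxation over
% occupation measures mu on the state-control space; every admissible
% control induces such a measure, hence the inequality.  Under standard
% assumptions the relaxation is tight (equality holds).
\begin{align*}
  v(x) &= \inf_{u(\cdot)}
          \mathbb{E}\Bigl[\int_0^{\infty} e^{-\delta t} f\bigl(X_t^{x,u},u_t\bigr)\,dt\Bigr]
        \;\ge\; \inf_{\mu\in\Theta(x)} \int f\,d\mu, \\
  \Theta(x) &= \Bigl\{\mu \ \text{positive measure} :
          \int \bigl(L^{u}\phi(y)-\delta\phi(y)\bigr)\,\mu(dy,du)+\phi(x)=0
          \ \ \text{for all } \phi\in C_b^{2}\Bigr\}.
\end{align*}

% (iii) Classical Zubov-type template: with a running cost g >= 0
% vanishing only on the target set, a stability domain D is recovered
% as a sub-level set of the value function w.  The paper's actual
% characterization uses a semicontinuous generalized viscosity solution
% of the associated HJB equation and a level set of it.
\[
  w(x) = \inf_{u(\cdot)}
         \mathbb{E}\Bigl[\,1 - e^{-\int_0^{\infty} g(X_t^{x,u})\,dt}\Bigr],
  \qquad D = \{\,x : w(x) < 1\,\}.
\]

\end{document}

The constraint defining Theta(x) is simply Dynkin's formula applied to e^{-delta t} phi(X_t) for test functions phi, which is what makes the relaxed problem linear in the measure mu.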
