Approximation of optimal feedback control: a dynamic programming approach

Article ID: iaor20101567
Volume: 46
Issue: 3
Start Page Number: 395
End Page Number: 422
Publication Date: Mar 2010
Journal: Journal of Global Optimization
Keywords: programming: dynamic
Abstract:

We consider a general continuous-time, finite-dimensional deterministic system under a finite-horizon cost functional. Our aim is to calculate approximate solutions to the optimal feedback control. First we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes to solve the equation numerically: one based on a time-difference approximation, the other discretizing both time and space. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming is a minimizing sequence of the optimal feedback control of the continuous counterpart. An example is presented for the time-space algorithm; the results illustrate that the scheme is effective.
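To illustrate the kind of time-space discretization the abstract describes, the following is a minimal NumPy sketch, not the authors' algorithm: backward dynamic programming on a grid in time, space, and control for a simple hypothetical one-dimensional problem (dynamics x' = u, running cost x² + u², terminal cost x²). All problem data and grid parameters here are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D finite-horizon problem (illustrative assumptions):
#   dynamics      x' = u
#   running cost  x^2 + u^2
#   terminal cost x(T)^2
T, N = 1.0, 50                        # horizon and number of time steps
dt = T / N
xs = np.linspace(-2.0, 2.0, 101)      # space grid
us = np.linspace(-2.0, 2.0, 41)       # discrete control set

V = xs**2                             # terminal condition V(T, x) = x^2
policy = np.zeros((N, xs.size))       # approximate feedback control u*(t, x)

for n in range(N):                    # march backward from t = T to t = 0
    # one explicit Euler step of the dynamics for every (x, u) pair
    x_next = xs[:, None] + dt * us[None, :]
    # interpolate the value function at the arrival points
    V_next = np.interp(x_next, xs, V)
    # discrete dynamic programming: running cost + cost-to-go
    Q = dt * (xs[:, None]**2 + us[None, :]**2) + V_next
    best = Q.argmin(axis=1)
    policy[N - 1 - n] = us[best]
    V = Q[np.arange(xs.size), best]

# V now approximates the value function at t = 0;
# for this symmetric problem the value at x = 0 is zero.
print(round(V[xs.size // 2], 4))      # → 0.0
```

The feedback nature of the control shows up in `policy`: it stores an approximately optimal control for every grid point (t, x), so a trajectory can be synthesized from any initial state by looking up `policy` along the way. Convergence of such a discrete scheme to the viscosity solution of the HJB equation is exactly the kind of result the paper establishes.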
