Article ID: iaor20104884
Volume: 146
Issue: 1
Start Page Number: 189
End Page Number: 207
Publication Date: Jul 2010
Journal: Journal of Optimization Theory and Applications
Authors: Udrişte C, Ţevy I
This paper justifies the dynamic programming PDEs for optimal control problems whose performance criteria involve curvilinear integrals. The main novel feature, relative to the known theory, is that the multitime dynamic programming PDEs are now connected to the multitime maximum principle. For the first time, an interesting and useful connection between the multitime maximum principle and multitime dynamic programming is given, characterizing the optimal control by means of a PDE system that may be viewed as a multitime feedback law. Section 1 describes the roots of our point of view regarding the multitime Hamilton-Jacobi-Bellman PDEs. Section 2 recalls the multitime maximum principle formulated for an optimal control problem whose cost functional includes a curvilinear integral, and introduces the notion of the multitime maximum value function. Section 3 shows how the multitime control dynamics and the multitime maximum value function determine the multitime Hamilton-Jacobi-Bellman PDEs. Section 4 describes how the multitime dynamic programming method can be used in the design of multitime optimal controls. Section 5 shows that the multitime Hamilton PDEs are characteristic equations for the multitime Hamilton-Jacobi-Bellman PDEs and reveals the connection between multitime dynamic programming and the multitime maximum principle.
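As a rough illustration of the kind of system the abstract refers to, the LaTeX sketch below writes one possible form of a multitime Hamilton-Jacobi-Bellman system for a curvilinear-integral cost; the symbols w, X^0_alpha, X^i_alpha, t^alpha and u are illustrative notation assumed here, not taken verbatim from the paper.

% Sketch (assumed notation): multitime controlled dynamics driven by
% m "times" t = (t^1, ..., t^m), state x = (x^1, ..., x^n), control u,
\[
  \frac{\partial x^i}{\partial t^\alpha}(t) = X^i_\alpha\bigl(t, x(t), u(t)\bigr),
  \qquad \alpha = 1,\dots,m, \quad i = 1,\dots,n,
\]
% with a running cost given by a curvilinear integral along a curve joining 0 to t:
\[
  P\bigl(u(\cdot)\bigr) = \int_{\Gamma_{0,t}} X^0_\beta\bigl(s, x(s), u(s)\bigr)\, ds^\beta .
\]
% If w(t,x) denotes a multitime maximum value function, a Hamilton-Jacobi-Bellman
% type system in this setting consists of one PDE per index alpha
% (summation over the repeated index i is understood):
\[
  \frac{\partial w}{\partial t^\alpha}(t,x)
  + \max_{u}\Bigl\{ X^0_\alpha(t,x,u)
  + \frac{\partial w}{\partial x^i}(t,x)\, X^i_\alpha(t,x,u) \Bigr\} = 0 .
\]
% Where the maximum is attained, the maximizing control can be read off as a
% function u = u(t, x, \partial w/\partial x), i.e. a multitime feedback law.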