Optimal control without solving the Bellman equation

Article ID: iaor1995406
Country: Netherlands
Volume: 17
Issue: 4
Start Page Number: 621
End Page Number: 630
Publication Date: Jul 1993
Journal: Journal of Economic Dynamics and Control
Authors:
Keywords: programming: dynamic
Abstract:

This paper recommends an alternative to solving the Bellman partial differential equation for the value function in optimal control problems involving stochastic differential or difference equations. Instead, it recommends solving for the vector Lagrange multiplier associated with a first-order condition for a maximum. The method is preferable to Bellman's in that it exploits this first-order condition and solves only algebraic equations in the control variable, the Lagrange multiplier, and its derivatives, rather than a functional equation. The solution requires no global approximation of the value function and is likely to be more accurate than methods based on global approximations.
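To make the contrast concrete, here is a minimal sketch, not the paper's own algorithm: for a scalar linear-quadratic problem, guessing a linear multiplier function lambda(x) = m*x reduces the first-order conditions to a single algebraic equation in m, which can then be checked against the usual Bellman/Riccati recursion for the value function V(x) = -P*x^2 (where the envelope condition gives lambda(x) = V'(x) = -2*P*x). All symbols and parameter values (a, b, c, beta) are illustrative assumptions of this sketch.

```python
# Sketch (illustrative, not the paper's algorithm): solve for the Lagrange
# multiplier via one algebraic equation instead of the Bellman functional
# equation, for the scalar problem
#   maximize  -sum_t beta^t (x_t^2 + c*u_t^2)
#   subject to x_{t+1} = a*x_t + b*u_t
#
# First-order conditions with the linear guess lambda_t = m*x_t:
#   -2*c*u_t + beta*b*lambda_{t+1} = 0
#   lambda_t = -2*x_t + beta*a*lambda_{t+1}
# Eliminating u_t and lambda_{t+1} leaves a single algebraic equation in m.

from scipy.optimize import brentq

a, b, c, beta = 0.95, 1.0, 2.0, 0.96  # illustrative parameter values

def multiplier_equation(m):
    """Residual of the algebraic equation for the multiplier slope m."""
    # Policy implied by the FOCs: u_t = u_coef * x_t
    u_coef = beta * a * b * m / (2.0 * c - beta * b**2 * m)
    # Consistency of lambda_t = m*x_t with lambda_t = -2*x_t + beta*a*lambda_{t+1}
    return -2.0 + beta * a**2 * m + beta * a * b * m * u_coef - m

# One algebraic root-finding step gives the multiplier function lambda(x) = m*x.
m = brentq(multiplier_equation, -50.0, -1e-8)

# Cross-check via the value-function route: iterate the Riccati recursion for
# V(x) = -P*x^2; the envelope condition implies m should equal -2*P.
P = 1.0
for _ in range(10_000):
    P = 1.0 + beta * a**2 * P - (beta * a * b * P) ** 2 / (c + beta * b**2 * P)

print(f"multiplier slope m            = {m:.6f}")
print(f"-2*P from value-function side = {-2.0 * P:.6f}")
```

In the linear-quadratic case the two routes agree exactly; the abstract's point is that the multiplier route works with algebraic equations and local information rather than a global approximation of the value function.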
