Dynamic Programming for Mean-Field Type Control

Article ID: iaor20162399
Volume: 169
Issue: 3
Start Page Number: 902
End Page Number: 924
Publication Date: Jun 2016
Journal: Journal of Optimization Theory and Applications
Authors: Laurière, Mathieu; Pironneau, Olivier
Keywords: optimization, simulation, allocation: resources, combinatorial optimization, stochastic processes
Abstract:

We investigate a model problem for optimal resource management. The problem is a stochastic control problem of mean-field type. We compare a Hamilton–Jacobi–Bellman fixed-point algorithm to a steepest descent method derived from the calculus of variations. For mean-field type control problems, stochastic dynamic programming requires adaptation. The problem is reformulated as a distributed control problem by using the Fokker–Planck equation for the probability distribution of the stochastic process; then, an extended Bellman principle is derived by an argument different from the one used by P. L. Lions. Both algorithms are compared numerically.
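
As a reading aid, the Fokker–Planck reformulation mentioned in the abstract can be sketched as follows; the notation (drift b, volatility \sigma, running cost h, mean-field term g, initial density \rho_0) is illustrative and not taken from the paper, whose specific resource-management model differs in detail.

\[
dX_t = b(X_t, u_t)\,dt + \sigma\,dW_t, \qquad
J(u) = \mathbb{E}\!\left[\int_0^T h\bigl(X_t, \mathbb{E}[g(X_t)], u_t\bigr)\,dt\right].
\]

Because the cost depends on the law of $X_t$ (through $\mathbb{E}[g(X_t)]$), the problem is of mean-field type. Writing $\rho(t,\cdot)$ for the density of $X_t$ and restricting to feedback controls $u(t,x)$, the same problem becomes a deterministic distributed control problem constrained by the Fokker–Planck equation:

\[
\partial_t \rho + \partial_x\bigl(b(x,u)\,\rho\bigr) - \tfrac{\sigma^2}{2}\,\partial_{xx}\rho = 0,
\qquad \rho(0,\cdot)=\rho_0,
\]
\[
J(u) = \int_0^T\!\!\int_{\mathbb{R}} h\Bigl(x,\; \textstyle\int_{\mathbb{R}} g(y)\,\rho(t,y)\,dy,\; u(t,x)\Bigr)\,\rho(t,x)\,dx\,dt .
\]

It is on this deterministic reformulation that the extended Bellman principle is derived and that the two numerical methods compared in the paper (the HJB fixed-point iteration and the steepest descent on the control $u$) operate.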
