Finite-horizon dynamic optimisation when the terminal reward is a concave functional of the distribution of the final state

Article ID: iaor1999338
Country: United Kingdom
Volume: 30
Issue: 1
Start Page Number: 122
End Page Number: 136
Publication Date: Mar 1998
Journal: Advances in Applied Probability
Authors: ,
Keywords: control processes, optimization
Abstract:

We consider a problem similar in many respects to a finite-horizon Markov decision process, except that the reward to the individual is a strictly concave functional of the distribution of the state of the individual at the final time T. Reward structures such as these are of interest to biologists studying the fitness of different strategies in a fluctuating environment. The problem fails to satisfy the usual optimality equation and cannot be solved directly by dynamic programming. We establish equations characterising the optimal final distribution and an optimal policy π*. We show that in general π* will be a Markov randomised policy (or, equivalently, a mixture of Markov deterministic policies), and we develop an iterative, policy-improvement-based algorithm which converges to π*. We also consider an infinite-population version of the problem, and show that the population cannot do better using a coordinated policy than by each individual independently following the individual optimal policy π*.
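The abstract's key point, that a strictly concave terminal reward on the final-state *distribution* makes randomised policies strictly better than any deterministic one, can be illustrated with a toy sketch (a hypothetical example, not from the paper, assuming a one-step horizon with two terminal states and the concave functional F(p) = √p₀ + √p₁):

```python
import math

# Toy illustration (hypothetical, not from the paper): horizon T = 1,
# two terminal states. Action a in {0, 1} sends the individual to
# state a with certainty. The reward is a strictly concave functional
# of the *distribution* p = (p0, p1) of the final state:
#     F(p) = sqrt(p0) + sqrt(p1)

def terminal_reward(p):
    """Strictly concave functional of the final-state distribution."""
    return sum(math.sqrt(q) for q in p)

# A deterministic policy concentrates all mass on one state.
det_reward = terminal_reward((1.0, 0.0))   # = 1.0

# A randomised policy choosing each action with probability 1/2
# spreads the mass, and strict concavity rewards the spread.
rand_reward = terminal_reward((0.5, 0.5))  # = 2 * sqrt(0.5) ≈ 1.414

assert rand_reward > det_reward
```

Because the reward depends on the whole distribution rather than accruing trajectory by trajectory, the standard backward-induction optimality equation does not apply, which is why the deterministic policy (reward 1.0) is beaten by the 50/50 mixture (reward ≈ 1.414) here.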
