Article ID: iaor200916821
Country: United States
Volume: 19
Issue: 2
Start Page Number: 161
End Page Number: 174
Publication Date: Apr 2007
Journal: INFORMS Journal on Computing
Authors: Hu Jiaqiao, Fu Michael C., Ramezani Vahid R., Marcus Steven I.
Keywords: heuristics
This paper presents a new randomized search method called evolutionary random policy search (ERPS) for solving infinite-horizon discounted-cost Markov decision process (MDP) problems. The algorithm is particularly targeted at problems with large or uncountable action spaces. ERPS approaches a given MDP by iteratively dividing it into a sequence of smaller, random sub-MDP problems, constructed from information obtained by random sampling of the entire action space and by local search. Each sub-MDP is then solved approximately using a variant of the standard policy-improvement technique, yielding an elite policy. We show that the sequence of elite policies converges to an optimal policy with probability one. Numerical studies are carried out to illustrate the algorithm and compare its performance with existing procedures.
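Since the abstract only outlines the algorithm, the following minimal sketch may help convey the iteration it describes. It assumes a finite state space and a bounded scalar action space; the names (`erps_iteration`, `evaluate_policy`), the exploitation probability `q_exploit`, the local-search radius `r`, and the sample count `n_samples` are illustrative placeholders rather than the authors' notation, and the sub-MDP construction here is a simplification of the paper's population-based scheme and policy-improvement variant.

```python
import numpy as np

def evaluate_policy(policy, P, c, gamma):
    """Exact policy evaluation over the finite state space:
    solve (I - gamma * P_pi) v = c_pi for the discounted cost-to-go."""
    n = len(policy)
    P_pi = np.array([P(s, policy[s]) for s in range(n)])  # row s: next-state dist.
    c_pi = np.array([c(s, policy[s]) for s in range(n)])
    return np.linalg.solve(np.eye(n) - gamma * P_pi, c_pi)

def erps_iteration(elite, P, c, gamma, a_low, a_high,
                   n_samples=10, q_exploit=0.5, r=0.1, rng=None):
    """One ERPS-style iteration (illustrative sketch): build a random sub-MDP
    by sampling a small action set per state -- local perturbations of the
    elite action mixed with uniform draws from the whole action space --
    then apply one policy-improvement step restricted to that sub-MDP."""
    rng = rng or np.random.default_rng()
    v = evaluate_policy(elite, P, c, gamma)
    new_elite = elite.copy()
    for s in range(len(elite)):
        actions = [elite[s]]  # always retain the current elite action
        for _ in range(n_samples):
            if rng.random() < q_exploit:
                # exploitation: local search around the elite action
                a = float(np.clip(elite[s] + rng.uniform(-r, r), a_low, a_high))
            else:
                # exploration: uniform sampling of the entire action space
                a = rng.uniform(a_low, a_high)
            actions.append(a)
        # policy improvement over the sampled (sub-MDP) action set
        q_vals = [c(s, a) + gamma * P(s, a) @ v for a in actions]
        new_elite[s] = actions[int(np.argmin(q_vals))]
    return new_elite

# Toy usage (hypothetical): 2-state MDP where action a in [0, 1] is the
# probability of moving to the cheap state 0, traded off against a
# quadratic action cost.
if __name__ == "__main__":
    P = lambda s, a: np.array([a, 1.0 - a])
    c = lambda s, a: float(s) + 0.5 * a ** 2
    elite = np.array([0.5, 0.5])
    for _ in range(50):
        elite = erps_iteration(elite, P, c, gamma=0.9, a_low=0.0, a_high=1.0)
    print("elite policy:", elite)
```

The sketch keeps the elite action in every sampled action set, so each iteration can only improve the elite policy's one-step Bellman cost, mirroring the role the elite policy plays in the convergence claim above.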