Article ID: | iaor1990285 |
Country: | United States |
Volume: | 37 |
Issue: | 5 |
Start Page Number: | 791 |
End Page Number: | 797 |
Publication Date: | Sep 1989 |
Journal: | Operations Research |
Authors: | White Chelsea C., Scherer W.T. |
Keywords: | Markov processes |
The authors present three algorithms for solving the infinite horizon, expected discounted total reward partially observed Markov decision process (POMDP). Each algorithm integrates a successive approximations algorithm for the POMDP due to R.D. Smallwood and E.J. Sondik with an appropriately generalized numerical technique that has been shown to reduce CPU time until convergence in the completely observed case. The first technique is reward revision. The second is reward revision integrated with modified policy iteration. The third is a standard extrapolation. A numerical study indicates the potentially significant computational value of these algorithms.
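For context, the following is a minimal sketch of the Smallwood-Sondik style successive-approximations (value iteration) backup over alpha-vectors, applied to a hypothetical two-state, two-action, two-observation toy POMDP. The model numbers, variable names, and the grid-based pruning step are illustrative assumptions only; the sketch implements neither the paper's reward revision nor its modified policy iteration variants.

```python
# A minimal sketch of an alpha-vector value-iteration backup for a POMDP.
# The toy model below is hypothetical and only illustrates the mechanics.
import itertools
import numpy as np

def pomdp_backup(Gamma, T, Z, R, gamma):
    """One successive-approximations backup: Gamma -> Gamma'.

    Gamma : list of alpha-vectors (length-|S| np.ndarray)
    T[a]  : |S| x |S| transition matrix for action a
    Z[a]  : |S| x |O| observation matrix for action a (rows index next state)
    R[a]  : length-|S| immediate reward vector for action a
    """
    n_obs = Z[0].shape[1]
    new_Gamma = []
    for a in range(len(T)):
        # Project each alpha-vector through (action a, observation o):
        # alpha_{a,o}(s) = gamma * sum_{s'} T[a][s,s'] * Z[a][s',o] * alpha(s').
        proj = [[gamma * T[a] @ (Z[a][:, o] * alpha) for alpha in Gamma]
                for o in range(n_obs)]
        # Cross-sum: choose one projected vector per observation, add reward.
        for choice in itertools.product(*proj):
            new_Gamma.append(R[a] + sum(choice))
    return new_Gamma

def prune(Gamma, beliefs):
    # Approximate pruning: keep only vectors optimal at some sampled belief.
    keep = {int(np.argmax([float(b @ g) for g in Gamma])) for b in beliefs}
    return [Gamma[i] for i in sorted(keep)]

# Hypothetical toy model (illustrative numbers, not from the paper).
T = [np.eye(2),                                  # a=0 "listen": state unchanged
     np.full((2, 2), 0.5)]                       # a=1 "act": state resets
Z = [np.array([[0.85, 0.15], [0.15, 0.85]]),     # listening is informative
     np.full((2, 2), 0.5)]                       # acting is uninformative
R = [np.array([-1.0, -1.0]),                     # listening costs 1
     np.array([10.0, -100.0])]                   # acting pays off in state 0
beliefs = [np.array([p, 1.0 - p]) for p in np.linspace(0.0, 1.0, 41)]

Gamma = [np.zeros(2)]
for _ in range(50):                              # successive approximations
    Gamma = prune(pomdp_backup(Gamma, T, Z, R, gamma=0.95), beliefs)

b = np.array([0.5, 0.5])                         # uniform prior belief
print(max(float(b @ alpha) for alpha in Gamma))  # approximate value at b
```

A full implementation would replace the belief-grid filter with exact pruning of dominated vectors (e.g., via linear programs); the grid simply keeps the vector set small enough for this toy example.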