Article ID: iaor20021982
Country: United States
Volume: 49
Issue: 3
Start Page Number: 398
End Page Number: 412
Publication Date: May 2001
Journal: Operations Research
Authors: Kitanidis Peter K., Philbrick C. Russell
Keywords: markov processes
New dynamic programming methods are developed to solve stochastic control problems with a larger number of state variables than was previously practical. These methods approximate continuous cost-to-go functions numerically using accurate interpolation, greatly reducing the number of discrete states that must be evaluated. By efficiently incorporating information on first and second derivatives, the approximation reduces computational effort by several orders of magnitude relative to traditional methods, making it practical to apply dynamic programming to complex stochastic problems of higher dimension. Results are presented for hypothetical reservoir control problems with up to seven state variables and two random inputs.
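To illustrate the idea of derivative-based interpolation of the cost-to-go function, the sketch below runs backward stochastic dynamic programming for a single-reservoir problem with one state (storage) and one discrete random inflow. It is a minimal illustration, not the authors' algorithm: the horizon, grids, cost function, dynamics, and the use of a cubic Hermite spline with numerically estimated slopes are all illustrative assumptions; the paper works in higher dimensions and obtains derivative information more directly.

```python
# Minimal sketch (assumptions noted above, not the authors' implementation):
# value iteration where the cost-to-go is stored on a coarse grid and
# evaluated between grid points by cubic Hermite interpolation, which uses
# first-derivative information in addition to function values.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

T = 12                                        # planning horizon (assumed)
S_MAX = 100.0                                 # reservoir capacity (assumed units)
storage_grid = np.linspace(0.0, S_MAX, 11)    # coarse state discretization
releases = np.linspace(0.0, 40.0, 21)         # candidate release decisions
inflows = np.array([5.0, 15.0, 30.0])         # discrete random inflow scenarios
inflow_prob = np.array([0.3, 0.5, 0.2])
DEMAND = 20.0

def stage_cost(release):
    # quadratic penalty for deviating from a demand target (illustrative)
    return (release - DEMAND) ** 2

def transition(s, release, inflow):
    # mass balance with spill at capacity and emptiness at zero
    return np.clip(s - release + inflow, 0.0, S_MAX)

# terminal cost-to-go and its derivative on the grid (assumed zero)
values = np.zeros_like(storage_grid)
slopes = np.zeros_like(storage_grid)

for t in reversed(range(T)):
    # the Hermite interpolant uses both values and first derivatives, so a
    # coarse grid can still represent a smooth cost-to-go accurately
    future = CubicHermiteSpline(storage_grid, values, slopes)
    new_values = np.empty_like(values)
    for i, s in enumerate(storage_grid):
        best = np.inf
        for r in releases:
            if r > s + inflows.min():          # crude feasibility screen
                continue
            next_s = transition(s, r, inflows)
            expected = np.dot(inflow_prob, future(next_s))
            best = min(best, stage_cost(r) + expected)
        new_values[i] = best
    values = new_values
    # slopes estimated numerically here purely for illustration
    slopes = np.gradient(values, storage_grid)

print("approximate cost-to-go at half-full storage:",
      float(CubicHermiteSpline(storage_grid, values, slopes)(S_MAX / 2)))
```

The payoff suggested by the abstract is that, because derivative information lets a smooth cost-to-go be represented with far fewer grid points per state variable, the total number of discrete states that must be evaluated grows much more slowly with dimension than under traditional piecewise-constant or linear discretization.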