Article ID: iaor2002639
Country: United States
Volume: 32
Issue: 10
Start Page Number: 907
End Page Number: 919
Publication Date: Oct 2000
Journal: IIE Transactions
Authors: Haurie A., Moresino F.
Keywords: programming: dynamic, control processes
This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. The approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds in two stages: (i) the Hamilton–Jacobi–Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sampled times; this defines an associated discrete time stochastic control problem which, owing to the finiteness of the sample path set of the Markov disturbance process, can be written as a stochastic programming problem; and (ii) the very large event tree representing the sample path set is replaced with a reduced tree obtained by randomly sampling over the set of all possible paths. It is shown that the solution of the stochastic program defined on the randomly sampled tree converges toward the solution of the discrete time control problem as the sample size increases to infinity, and that the solution of the discrete time control problem converges to the solution of the flow control problem as the discretization mesh tends to zero. A comparison with a direct numerical solution of the dynamic programming equations is made for a single-part manufacturing flow control model in order to illustrate these convergence properties. Applications to larger models, which suffer from the curse of dimensionality under standard dynamic programming techniques, show the possible advantages of the method.
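The following minimal Python sketch illustrates the flavor of stage (ii) on a toy single-part flow control model. The model and all constants (horizon, demand rate, control grid, machine transition probabilities, cost weights) as well as the function names are illustrative assumptions, not taken from the paper: machine-state paths are sampled from a two-state Markov chain, merged by common prefix into a reduced event tree with empirical weights, and the resulting scenario-tree stochastic program is solved by backward recursion over the tree nodes.

```python
import random
from collections import Counter, defaultdict
from functools import lru_cache

# Illustrative toy instance (all values assumed, not from the paper).
T = 5                           # discretized horizon (time steps)
DEMAND = 1.0                    # demand per time step
U_GRID = (0.0, 1.0, 2.0)        # admissible production rates (finite grid)
P_STAY_UP, P_REPAIR = 0.9, 0.3  # machine Markov chain: P(up->up), P(down->up)
H, B = 1.0, 5.0                 # holding and backlog cost weights

def stage_cost(x):
    """Holding/backlog cost for inventory level x."""
    return H * max(x, 0.0) + B * max(-x, 0.0)

def sample_path(rng):
    """Sample one machine-state path (1 = up, 0 = down), starting up."""
    s, path = 1, []
    for _ in range(T):
        path.append(s)
        s = 1 if rng.random() < (P_STAY_UP if s else P_REPAIR) else 0
    return tuple(path)

def solve_on_sampled_tree(n_paths, x0=0.0, seed=0):
    """Build a reduced event tree from sampled paths and solve the
    associated stochastic program by backward recursion on the tree."""
    rng = random.Random(seed)
    counts = Counter(sample_path(rng) for _ in range(n_paths))

    # children[prefix] -> Counter{one-step-longer prefix: empirical weight}
    children = defaultdict(Counter)
    for path, c in counts.items():
        for t in range(T - 1):
            children[path[:t + 1]][path[:t + 2]] += c

    @lru_cache(maxsize=None)
    def value(node, x):
        cost = stage_cost(x)
        if len(node) == T:      # leaf node: last stage, no control left
            return cost
        kids = children[node]
        total = sum(kids.values())
        s = node[-1]            # current machine state (0 or 1)
        # Nonanticipative control: one decision per tree node, evaluated
        # against the empirical distribution over that node's children.
        best = min(
            sum(w * value(kid, round(x + u * s - DEMAND, 6))
                for kid, w in kids.items()) / total
            for u in U_GRID
        )
        return cost + best

    root = next(iter(counts))[:1]   # all sampled paths start in state "up"
    return value(root, x0)

# Cost estimates should stabilize as the sample size grows, mirroring
# the convergence result stated in the abstract.
for n in (10, 100, 1000):
    print(n, solve_on_sampled_tree(n))
```

Merging sampled paths by shared prefix is what keeps the decisions nonanticipative: two scenarios that agree up to time t pass through the same node and therefore receive the same control at that node.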