Article ID: | iaor19971592 |
Country: | United States |
Volume: | 25 |
Issue: | 11 |
Start Page Number: | 1987 |
End Page Number: | 2000 |
Publication Date: | Nov 1994 |
Journal: | International Journal of Systems Science |
Authors: | Chao X.L. |
Keywords: | programming: dynamic |
The paper considers an optimal intensity allocation problem for a non-Markovian queueing network with a single server. Both the arrival and service processes are modelled by point processes with stochastic intensities, and all stations share a single service facility. After completing service at one station, a customer switches to another station or leaves the system according to given probabilities. A linear holding cost is incurred at each station. The objective is to control the arrival intensity at each station and to schedule the service intensity across stations so that the expected discounted cost over an infinite horizon is minimized. The main result is that, within the class of non-idling control policies, the optimal intensity control is of bang-bang type and the optimal scheduling rule turns out to be Klimov's rule; in the larger class of policies that permits enforced idling, the optimal control is a modified static policy. The approach combines point-process calculus with dynamic programming.
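A minimal sketch of the objective, in notation not taken from the paper: writing $\alpha > 0$ for the discount rate, $c_i$ for the holding cost rate and $Q_i(t)$ for the queue length at station $i$, and $\lambda_i(t)$, $\mu_i(t)$ for the controlled arrival and service intensities, the discounted-cost problem has the form

    \min_{\lambda,\,\mu}\; \mathbb{E}\!\left[ \int_0^\infty e^{-\alpha t} \sum_i c_i\, Q_i(t)\, dt \right]

over admissible intensity processes. The bang-bang property then means that the optimal $\lambda_i(t)$ takes only its extreme admissible values, e.g. $\lambda_i(t) \in \{\lambda_i^{\min}, \lambda_i^{\max}\}$; these symbols are illustrative assumptions, not the paper's own notation.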