Article ID: iaor19911787
Country: Switzerland
Volume: 28
Start Page Number: 243
End Page Number: 260
Publication Date: Apr 1991
Journal: Annals of Operations Research
Authors: Lamond B.F.
Keywords: Markov processes
A service system with a single server, a finite waiting room and two classes of customers with deterministic service times is considered. Primary jobs arrive at random and are admitted whenever there is room in the system. At the beginning of each period, secondary jobs can be admitted from an infinite pool. A revenue is earned upon admission of each job, with primary jobs bringing a higher contribution than secondary jobs; the objective is to maximize the total discounted revenue over an infinite horizon. The system is modeled as a discrete-time Markov decision process, and it is shown that a monotone admission policy is optimal when the number of primary arrivals has a fixed distribution. Moreover, when the primary arrival distribution varies with time according to a finite-state Markov chain, it is shown that the optimal policy is conditionally monotone and that the numerical computation of an optimal policy in this case is substantially more difficult than in the case of stationary arrivals.
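To make the stationary-arrival case concrete, the sketch below runs value iteration on a simplified version of the admission problem described in the abstract. The specific dynamics chosen here (one deterministic service completion per period, truncated-Poisson primary arrivals, the order of events within a period, and all numerical parameters) are illustrative assumptions, not the paper's exact formulation; the state is the number of jobs in the system and the decision is how many secondary jobs to admit.

```python
# Minimal value-iteration sketch for a simplified admission-control MDP,
# assuming: capacity N, one deterministic service completion per period,
# truncated-Poisson primary arrivals, secondary admissions decided first.
from math import exp, factorial
import numpy as np

N = 10                 # waiting-room capacity (assumed)
r_p, r_s = 2.0, 1.0    # revenue per primary / secondary job, r_p > r_s
beta = 0.95            # discount factor
lam = 1.5              # mean primary arrivals per period (assumed Poisson)

# Truncated Poisson arrival distribution; mass above N is lumped at N.
p = np.array([exp(-lam) * lam**k / factorial(k) for k in range(N + 1)])
p[N] += 1.0 - p.sum()

def value_iteration(tol=1e-8, max_iter=10_000):
    """Value function and secondary-admission policy over states
    x = number of jobs in the system at the start of a period."""
    V = np.zeros(N + 1)
    for _ in range(max_iter):
        V_new = np.empty_like(V)
        policy = np.zeros(N + 1, dtype=int)
        for x in range(N + 1):
            best = -np.inf
            for s in range(N - x + 1):        # secondary jobs admitted now
                val = r_s * s
                for a, pa in enumerate(p):    # primary arrivals this period
                    admitted = min(a, N - x - s)   # admitted while room remains
                    nxt = max(x + s + admitted - 1, 0)  # one service completion
                    val += pa * (r_p * admitted + beta * V[nxt])
                if val > best:
                    best, policy[x] = val, s
            V_new[x] = best
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, policy
        V = V_new
    return V, policy

V, policy = value_iteration()
# With stationary arrivals, the paper's result suggests the policy is
# monotone: fewer secondary admissions as the system fills up.
print("secondary jobs admitted by state:", policy)
```

The key trade-off the sketch exhibits is that admitting a low-revenue secondary job now may block a higher-revenue primary arrival later, which is why the optimal number of secondary admissions decreases with congestion; handling arrival distributions that switch according to a finite-state Markov chain would require augmenting the state with the current arrival regime.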