Article ID: | iaor19881226 |
Country: | United States |
Volume: | 34 |
Start Page Number: | 335 |
End Page Number: | 184 |
Publication Date: | Mar 1989 |
Journal: | IEEE Transactions on Automatic Control |
Authors: | Luh Peter B., Miao Xi-Yi, Chang Shi-Chung, Castanon David A. |
Keywords: | Markov processes |
In this note, the authors study a class of renewable resource allocation problems for the processing of dynamically arriving tasks with deterministic deadlines. This class of problems has many applications; however, it conforms to neither the standard resource allocation model nor the standard optimal control model, so a new problem formulation is developed and analyzed to provide a satisfactory answer. The model presented here explicitly considers time available, time required, resources available, resources required, stochastic arrivals of multiple types of tasks, importance of tasks, timeliness of processing, and accuracy of resource allocation. After state augmentation, the problem becomes a Markovian decision problem and can be solved, at least in principle, by a stochastic dynamic programming (SDP) method. For a problem with an infinite planning horizon, the optimal strategy is shown to be stationary under mild conditions. An SDP algorithm based on a successive approximation technique is developed to obtain the optimal stationary strategy. The effects of key system parameters on optimal decisions are investigated and analyzed through numerical examples. Since the computational complexity of the algorithm grows exponentially with problem size, practical application of the algorithm is limited to small problems. Two heuristic rules are therefore investigated and compared to the optimal policy. The results of this study serve as a starting point for further characterization of the optimal policy, for understanding and designing effective heuristic rules, and for developing (in conjunction with experimental studies) normative-descriptive models of human task selection and resource allocation.
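The successive approximation technique mentioned in the abstract is, in essence, value iteration on the augmented Markovian decision problem. The sketch below is a minimal generic illustration, not the authors' actual formulation: their state encodes task arrivals, deadlines, and resource levels, whereas here the transition probabilities `P`, rewards `R`, and discount factor `gamma` are placeholder assumptions for an arbitrary finite discounted MDP.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Successive approximation (value iteration) for a finite discounted MDP.

    P: array of shape (A, S, S); P[a, s, s2] = Pr(next state s2 | state s, action a).
    R: array of shape (A, S); R[a, s] = expected one-step reward.
    Returns the optimal value function V and a stationary policy pi.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_s2 P[a, s, s2] * V[s2]
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:  # sup-norm stopping test
            return V_new, Q.argmax(axis=0)   # stationary greedy policy
        V = V_new

# Toy usage with arbitrary placeholder dynamics: 2 actions, 3 states.
rng = np.random.default_rng(0)
P = rng.random((2, 3, 3)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((2, 3))
V, pi = value_iteration(P, R)
print("V =", V, "policy =", pi)
```

The convergence of this iteration to a stationary optimal policy holds for discounted finite MDPs; the "mild conditions" cited in the abstract refer to the authors' infinite-horizon setting, which this sketch does not reproduce. The per-iteration cost scales with the size of the augmented state space, which is the exponential growth that motivates the heuristic rules studied in the note.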