Stochastic scheduling and forwards induction

Article ID: iaor1997860
Country: Netherlands
Volume: 57
Issue: 2/3
Start Page Number: 145
End Page Number: 165
Publication Date: Feb 1995
Journal: Discrete Applied Mathematics
Authors:
Keywords: Markov processes
Abstract:

The paper considers the problem (J, ≺) of allocating a single machine to the stochastic tasks in J in such a way that the precedence constraints ≺ are respected. When rewards are discounted and additive, the problem of determining an optimal scheduling policy within the class of fully preemptive policies can be formulated as a discounted Markov decision process (MDP). Policies are developed by utilising a principle of forwards induction (FI). Such policies may be thought of as quasi-myopic in that they make choices which maximise a natural measure of the reward rate currently available. A condition is given which is (necessary and) sufficient for the optimality of FI policies and which is satisfied when ≺ is an out-forest. The notion of reward rate used to develop FI policies can also be used to derive performance bounds for general scheduling policies. These bounds can be used to make probabilistic statements about heuristics (i.e. for randomly chosen (J, ≺)). The FI approach can also be used to develop policies for general discounted MDPs, and the available performance bounds may be used to make probabilistic statements about the performance of FI policies in more complex scheduling environments where optimality results are not available.
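The quasi-myopic flavour of an FI policy can be illustrated with a minimal sketch. The code below is not the paper's construction: it assumes deterministic processing times `p` and a completion reward `r` per task (for genuinely stochastic tasks one would replace both by their expected discounted counterparts), and it uses the classical discounted reward-rate index r·β^p / (1 − β^p), greedily scheduling whichever precedence-feasible task currently maximises that index. All names and the discount factor are illustrative assumptions.

```python
from dataclasses import dataclass

BETA = 0.95  # discount factor (assumed for illustration)

@dataclass
class Task:
    name: str
    p: int        # processing time (deterministic simplification)
    r: float      # reward received at completion
    preds: set    # names of predecessor tasks (precedence constraints)

def fi_index(task: Task, beta: float = BETA) -> float:
    # Discounted reward rate of running the task to completion now:
    # reward discounted to completion, divided by discounted time spent.
    return task.r * beta**task.p / (1 - beta**task.p)

def fi_schedule(tasks):
    """Greedy forwards-induction order respecting precedence constraints.

    Returns the processing order and the total discounted reward earned.
    """
    done, order, t, total = set(), [], 0, 0.0
    pending = list(tasks)
    while pending:
        # Tasks whose predecessors have all completed are available.
        available = [x for x in pending if x.preds <= done]
        best = max(available, key=fi_index)   # quasi-myopic choice
        t += best.p
        total += best.r * BETA**t
        done.add(best.name)
        order.append(best.name)
        pending.remove(best)
    return order, total
```

Because the choice at each step depends only on the currently available tasks, the policy is myopic with respect to the reward-rate index rather than to the raw one-step reward, which is what makes it "quasi-myopic" in the abstract's sense.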
