Dynamic process improvement

Article ID: iaor1989322
Country: United States
Volume: 37
Issue: 4
Start Page Number: 580
End Page Number: 591
Publication Date: Jul 1989
Journal: Operations Research
Authors:
Keywords: programming: markov decision, inventory
Abstract:

This paper explores the economics of investing in gradual process improvement, a key component, with empirically supported importance, of the well-known Just-in-Time and Total Quality Control philosophies. The authors formulate a Markov decision process, analyze it, and apply it to the problem of setup reduction and process quality improvement. Instead of a one-time investment opportunity for a large, predictable technological advance, they allow many smaller investments over time, with potential process improvements of random magnitude. The authors use a somewhat nonstandard formulation of the immediate return, which facilitates the derivation of results. The policy that simply maximizes the immediate return, called the last chance policy, provides an upper bound on the optimal investment amount. Furthermore, if the last chance policy invests in process improvement, then so does the optimal policy. Each continues investing until a shared target state is attained. The authors derive fairly restrictive conditions that must be met for the policy of investing forever in process improvements to be optimal. Decreasing the uncertainty of the process (making the potential improvements more predictable) has a desirable effect: the total return increases and the target state rises, so the ultimate system is more productive. Numerical examples are presented and analyzed.
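The kind of Markov decision process described in the abstract can be illustrated with a small value-iteration sketch. Everything below is a simplified stand-in for the paper's model, not the authors' actual formulation: the state is an integer "setup cost level" with 0 as the fully improved target state, each investment of fixed cost K succeeds with probability p and improves the state by one step, operating cost is linear in the state, and the specific numbers (S, c, K, p, beta) are arbitrary illustrative choices.

```python
# Toy "dynamic process improvement" MDP (illustrative assumptions only,
# not the formulation in the paper).
S = 5       # states 0..S; 0 is the fully improved target state
c = 2.0     # per-period operating cost per unit of state
K = 1.5     # cost of one investment in improvement
p = 0.6     # probability an investment yields a one-step improvement
beta = 0.9  # discount factor

def value_iteration(tol=1e-9):
    """Compute the optimal value function by standard value iteration."""
    V = [0.0] * (S + 1)
    while True:
        V_new = []
        for s in range(S + 1):
            stay = -c * s + beta * V[s]  # do not invest this period
            if s > 0:
                ev = p * V[s - 1] + (1 - p) * V[s]  # random improvement
                invest = -c * s - K + beta * ev
            else:
                invest = float("-inf")  # target state: nothing to improve
            V_new.append(max(stay, invest))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

V = value_iteration()

# Recover the greedy (optimal) policy: 1 = invest, 0 = stop investing.
policy = []
for s in range(S + 1):
    stay = -c * s + beta * V[s]
    invest = float("-inf")
    if s > 0:
        invest = -c * s - K + beta * (p * V[s - 1] + (1 - p) * V[s])
    policy.append(1 if invest > stay else 0)

print("V      =", [round(v, 3) for v in V])
print("policy =", policy)
```

With these particular numbers the discounted savings from each one-step improvement far exceed the investment cost, so the computed policy invests in every state above the target and stops once state 0 is reached, mirroring the "invest until a target state is attained" behavior the abstract describes.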
