Strategic capacity decision‐making in a stochastic manufacturing environment using real‐time approximate dynamic programming

Article ID: iaor20103136
Volume: 57
Issue: 3
Start Page Number: 211
End Page Number: 224
Publication Date: Apr 2010
Journal: Naval Research Logistics
Authors: , ,
Keywords: programming: dynamic
Abstract:

In this study, we illustrate a real‐time approximate dynamic programming (RTADP) method for solving multistage capacity decision problems in a stochastic manufacturing environment, using an exemplary three‐stage manufacturing system with recycle. The system is a moderate‐size queuing network that experiences stochastic variations in demand and product yield. The dynamic capacity decision problem is formulated as a Markov decision process (MDP). The proposed RTADP method starts with a set of heuristics and learns a superior‐quality solution by interacting with the stochastic system via simulation. The curse of dimensionality associated with DP methods is alleviated by several notions: an ‘evolving set of relevant states,’ for which the value function table is built and updated; an ‘adaptive action set’ for keeping track of attractive action candidates; and a ‘nonparametric k nearest neighbor averager’ for value function approximation. The performance of the learned solution is evaluated against (1) an ‘ideal’ solution derived from a mixed integer programming (MIP) formulation that assumes full knowledge of the future realized values of the stochastic variables, (2) a myopic heuristic solution, and (3) a sample‐path‐based rolling horizon MIP solution. The policy learned through the RTADP method turned out to be superior to policies (2) and (3).
