Article ID: | iaor20083482 |
Country: | United Kingdom |
Volume: | 6 |
Issue: | 3 |
Start Page Number: | 188 |
End Page Number: | 199 |
Publication Date: | Sep 2007 |
Journal: | Journal of Revenue and Pricing Management |
Authors: | Chen Victoria C.P., Rosenberger Jay M., Günther Dirk, Siddappa Sheela |
Keywords: | financial, markov processes, statistics: experiment, statistics: multivariate, yield management |
We present a refinement of a network revenue management method that employs design of experiments and multivariate adaptive regression splines (MARS) to approximate upper and lower bounds for the Markov decision process (MDP) value function. This approach comprises an offline statistical modelling module, which approximates the value function to provide a policy for accepting or rejecting customer booking requests, and an online availability processor module, which makes the actual decisions as booking requests arrive. In the statistical modelling module, the data for the value function upper and lower bound functions are obtained by solving deterministic and stochastic linear programming problems, respectively. The refinement in this paper identifies realistic ranges of remaining seat capacity at different reading periods by adding a state space simulation module that precedes the statistical modelling module, effectively combining the advantages of a design of experiments approach with those of a simulation-based approach. Simulation results on a real airline network with actual demand data demonstrate up to a 2.7 per cent improvement over the original method before refinement, which corresponds to a 5.8 per cent improvement over the bid price approach that uses the deterministic linear programming model to determine bid prices. The state space simulation module could also be applied to improve approximate dynamic programming methods that rely on value function approximation.
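The online accept/reject policy described in the abstract can be illustrated with the standard marginal-value (bid-price) rule: accept a booking request only if its fare covers the opportunity cost of a seat, i.e. the drop in the approximated value function from consuming one unit of remaining capacity. The sketch below is a toy illustration of that rule, not the paper's implementation; `V_hat` is a hypothetical concave placeholder, whereas the paper fits MARS models to bounds obtained from deterministic and stochastic linear programs.

```python
def V_hat(remaining_seats, reading_period):
    """Hypothetical stand-in for the MARS value function approximation.
    A simple concave function of remaining capacity; the actual method
    fits MARS to LP-based upper/lower bounds on the MDP value function."""
    return 100.0 * remaining_seats - 0.5 * remaining_seats ** 2 + 10.0 * reading_period


def accept_request(fare, remaining_seats, reading_period):
    """Marginal-value acceptance rule: accept the request iff the fare
    is at least the opportunity cost V(x, t) - V(x - 1, t) of one seat."""
    if remaining_seats <= 0:
        return False  # no capacity left; reject outright
    opportunity_cost = (V_hat(remaining_seats, reading_period)
                        - V_hat(remaining_seats - 1, reading_period))
    return fare >= opportunity_cost
```

With 50 seats remaining, the opportunity cost under this toy `V_hat` is 50.5, so a 120.0 fare is accepted while a 10.0 fare is rejected. The refinement in the paper improves the quality of `V_hat` itself, by simulating the state space to restrict the fitting region to realistic capacity ranges at each reading period.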