Article ID: | iaor20164050 |
Volume: | 63 |
Issue: | 6 |
Start Page Number: | 433 |
End Page Number: | 448 |
Publication Date: | Sep 2016 |
Journal: | Naval Research Logistics (NRL) |
Authors: | Shi Cong, Zhang Huanan, Qin Chao, Hua Cheng |
Keywords: | Markov processes, combinatorial optimization, stochastic processes, demand |
We study an admission control model in revenue management with nonstationary and correlated demands over a finite discrete time horizon. The arrival probabilities are updated using currently available information, namely past customer arrivals and other exogenous information. We develop a regret-based framework that measures the difference in revenue between a clairvoyant optimal policy, which has access to all realizations of randomness a priori, and a given feasible policy, which does not have access to this future information. This regret-minimization framework makes explicit the trade-offs of each accept/reject decision. Through the lens of approximation algorithms, we devise a conceptually simple regret-parity policy. We show that the proposed policy achieves a 2-approximation of the optimal policy in terms of total regret for a two-class problem, and we then extend our results to a multiclass problem with a fairness constraint. Our goal in this article is to make progress toward understanding the marriage between stochastic regret minimization and approximation algorithms in the realm of revenue management and dynamic resource allocation.
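A minimal formalization of the regret notion described in the abstract, under assumed notation (the symbols $\pi$, $\omega$, $\mathrm{OPT}$, and $R^{\pi}$ are illustrative and not taken from the article itself):

\[
\mathrm{Regret}(\pi) \;=\; \mathbb{E}_{\omega}\bigl[\mathrm{OPT}(\omega) - R^{\pi}(\omega)\bigr],
\qquad
\mathbb{E}\bigl[\mathrm{Regret}(\pi_{\mathrm{parity}})\bigr] \;\le\; 2 \cdot \min_{\pi}\, \mathbb{E}\bigl[\mathrm{Regret}(\pi)\bigr],
\]

where $\mathrm{OPT}(\omega)$ denotes the revenue collected by the clairvoyant policy on sample path $\omega$, and $R^{\pi}(\omega)$ denotes the revenue of a feasible, non-anticipating policy $\pi$. The second inequality restates, in this assumed notation, the 2-approximation guarantee claimed for the regret-parity policy in the two-class problem.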