Bayesian Dynamic Pricing Policies: Learning and Earning Under a Binary Prior Distribution

Article ID: iaor20122990
Volume: 58
Issue: 3
Start Page Number: 570
End Page Number: 586
Publication Date: Mar 2012
Journal: Management Science
Authors: J. Michael Harrison, N. Bora Keskin, Assaf Zeevi
Keywords: stochastic processes, demand, learning
Abstract:

Motivated by applications in financial services, we consider a seller who offers prices sequentially to a stream of potential customers, observing either success or failure in each sales attempt. The parameters of the underlying demand model are initially unknown, so each price decision involves a trade-off between learning and earning. Attention is restricted to the simplest kind of model uncertainty, where one of two demand models is known to apply, and we focus initially on the performance of the myopic Bayesian policy (MBP), variants of which are commonly used in practice. Because learning is passive under the MBP (that is, learning only takes place as a by-product of actions that have a different purpose), it can lead to incomplete learning and poor profit performance. However, under one additional assumption, a constrained variant of the myopic policy is shown to have the following strong theoretical virtue: the expected performance gap relative to a clairvoyant who knows the underlying demand model is bounded by a constant as the number of sales attempts becomes large.

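To make the setting concrete, the sketch below simulates a myopic Bayesian policy under a binary prior: two candidate demand models map each posted price to a Bernoulli purchase probability, the posterior weight on model A is updated after every sales attempt, and the next price maximizes one-period expected revenue under the current belief. The specific demand curves, price grid, and exclusion width are hypothetical choices for illustration, not taken from the paper; the `constrained=True` branch only mimics the idea of keeping the myopic policy away from the price at which the two models are indistinguishable.

```python
import numpy as np

def demand_a(p):
    # Hypothetical demand model A: purchase probability at price p.
    return np.clip(1.0 - 0.5 * p, 0.0, 1.0)

def demand_b(p):
    # Hypothetical demand model B.
    return np.clip(0.8 - 0.3 * p, 0.0, 1.0)

def myopic_price(belief, price_grid):
    """Price maximizing one-period expected revenue under the current belief."""
    expected_revenue = price_grid * (belief * demand_a(price_grid)
                                     + (1.0 - belief) * demand_b(price_grid))
    return price_grid[np.argmax(expected_revenue)]

def bayes_update(belief, price, sale):
    """Posterior probability that model A governs demand, after one outcome."""
    like_a = demand_a(price) if sale else 1.0 - demand_a(price)
    like_b = demand_b(price) if sale else 1.0 - demand_b(price)
    return belief * like_a / (belief * like_a + (1.0 - belief) * like_b)

def run_mbp(true_demand, belief=0.5, horizon=1000, seed=0,
            constrained=False, exclusion_width=0.1):
    """Simulate the (constrained) myopic Bayesian policy on a price grid.

    `constrained=True` removes prices near the point where the two demand
    curves coincide -- a stand-in for the paper's constrained variant.
    """
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.01, 2.0, 200)
    if constrained:
        p_uninformative = grid[np.argmin(np.abs(demand_a(grid) - demand_b(grid)))]
        grid = grid[np.abs(grid - p_uninformative) > exclusion_width]
    revenue = 0.0
    for _ in range(horizon):
        price = myopic_price(belief, grid)
        sale = rng.random() < true_demand(price)    # Bernoulli sales outcome
        revenue += price * sale
        belief = bayes_update(belief, price, sale)  # learning is a by-product
    return revenue, belief

# Example: the true model is A; compare the plain and constrained policies.
rev_plain, belief_plain = run_mbp(demand_a)
rev_constrained, belief_constrained = run_mbp(demand_a, constrained=True)
```

In this toy version, the unconstrained policy can drift toward the price at which the two candidate demand curves agree, where sales outcomes carry no information about which model is true; excluding a neighborhood of that price keeps every posted price informative, which is the intuition behind the constrained variant's bounded performance gap described in the abstract.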