Article ID: | iaor20012799 |
Country: | United States |
Volume: | 27 |
Issue: | 3 |
Start Page Number: | 569 |
End Page Number: | 588 |
Publication Date: | Jun 1996 |
Journal: | Decision Sciences |
Authors: | Gupta A., Desai V.S. |
Keywords: | Markov processes |
This study belongs to the class of models in which advertising wearout and the differences between the learning and forgetting of advertisements are explicitly included. A discrete-time Markov decision modeling approach is used to obtain optimal control limit policies, and an algorithm is provided to identify them. A control limit policy specifies whether to advertise in a given time period on the basis of the awareness level in that period. Thus, the duration for which advertising is withheld is determined endogenously, and the algorithm determines this duration for a given set of parameters. This is a particularly desirable feature, since advertising practitioners are interested in determining the optimal duration of advertising pulses. Computational experience suggests that the algorithm is very fast and easy to implement. Conditions on model parameters indicating the relative efficacy of pulsing versus uniform advertising are also provided.
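The control limit idea described in the abstract can be illustrated with a minimal sketch. The state space, transition probabilities, and reward structure below are illustrative assumptions, not the authors' model: awareness takes discrete levels 0..N, advertising raises awareness with some probability (learning) while not advertising lets it decay (forgetting), and the per-period reward is revenue proportional to awareness minus an advertising cost. Value iteration then yields a stationary policy that can be inspected for control limit structure (advertise only below some awareness threshold).

```python
# Illustrative finite-state MDP over awareness levels 0..N.
# All parameter values below are assumptions for demonstration only.
N = 10                      # awareness levels 0..N
p_up, p_down = 0.7, 0.4     # learning (with ads) and forgetting (without) probabilities
revenue, cost = 1.0, 3.0    # revenue per awareness level; per-period advertising cost
gamma = 0.9                 # discount factor

def transitions(s, advertise):
    """Return [(prob, next_state)] under the simple learn/forget dynamics."""
    if advertise:
        return [(p_up, min(s + 1, N)), (1 - p_up, s)]
    return [(p_down, max(s - 1, 0)), (1 - p_down, s)]

def value_iteration(tol=1e-9):
    """Solve the MDP; return the value function and a boolean advertise policy."""
    V = [0.0] * (N + 1)
    while True:
        new_V, policy = [], []
        for s in range(N + 1):
            q = []
            for a in (False, True):
                r = revenue * s - (cost if a else 0.0)
                q.append(r + gamma * sum(p * V[t] for p, t in transitions(s, a)))
            new_V.append(max(q))
            policy.append(q[1] > q[0])      # True means advertise in state s
        if max(abs(x - y) for x, y in zip(new_V, V)) < tol:
            return new_V, policy
        V = new_V

V, policy = value_iteration()
# First awareness level at which it is optimal not to advertise:
# a candidate control limit for these (assumed) parameters.
threshold = next((s for s, a in enumerate(policy) if not a), N + 1)
```

In a control limit policy, the threshold found above is exactly the quantity the paper's algorithm determines endogenously: how long advertising is withheld follows from how far awareness must decay before advertising resumes.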