Article ID: iaor20164702
Volume: 63
Issue: 5
Start Page Number: 1058
End Page Number: 1076
Publication Date: Oct 2015
Journal: Operations Research
Authors: Van Roy, Benjamin; Park, Beomsoo
Keywords: investment, simulation, learning, control, programming: quadratic, combinatorial optimization
We consider a model in which a trader aims to maximize expected risk-adjusted profit while trading a single security. In our model, each price change is a linear combination of observed factors, impact resulting from the trader's current and prior activity, and unpredictable random effects. The trader must learn the coefficients of a price impact model while trading. We propose a new method for simultaneous execution and learning, the confidence-triggered regularized adaptive certainty equivalent (CTRACE) policy, and establish a poly-logarithmic finite-time expected regret bound. In addition, we demonstrate via Monte Carlo simulation that CTRACE outperforms the certainty equivalent policy and a recently proposed reinforcement learning algorithm that is designed to explore efficiently in linear-quadratic control problems.
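To make the price dynamics concrete, the following is a minimal simulation sketch of the kind of model the abstract describes: each price change is a linear combination of an observed factor, impact from current and prior trades, and random noise. The parameter names (phi, lam, rho, sigma), the single-factor setup, and the geometric decay of transient impact are illustrative assumptions, not the authors' exact specification, and the placeholder random trades stand in for a policy such as CTRACE.

```python
import numpy as np

# Hypothetical one-factor price-impact dynamics in the spirit of the abstract.
# All parameter names and the geometric-decay impact structure are
# illustrative assumptions, not the paper's exact model.

rng = np.random.default_rng(0)

T = 1000            # number of trading periods
phi = 0.5           # factor loading on the observed factor
lam = 0.1           # impact coefficient (the quantity the trader must learn)
rho = 0.9           # geometric decay of impact from prior activity
sigma = 0.01        # std. dev. of unpredictable random effects

price = 100.0
impact_state = 0.0  # accumulated transient impact from prior trades

for t in range(T):
    factor = rng.normal()        # observed predictive factor
    trade = 10.0 * rng.normal()  # placeholder trade size; a real execution
                                 # policy (e.g. CTRACE) would choose this
    impact_state = rho * impact_state + lam * trade
    noise = sigma * rng.normal()
    # price change = factor term + impact from current/prior trades + noise
    price += phi * factor + impact_state + noise
```

In the paper's setting the trader observes only the factor and the realized price changes, and must estimate the impact coefficients (here, lam and rho) online while trading; the regret bound measures the cost of that learning relative to trading with known coefficients.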