Linearly parameterized bandits

Article ID: iaor20104592
Volume: 35
Issue: 2
Start Page Number: 395
End Page Number: 411
Publication Date: May 2010
Journal: Mathematics of Operations Research
Authors: Paat Rusmevichientong, John N. Tsitsiklis
Keywords: bandit problems
Abstract:

We consider bandit problems involving a large (possibly infinite) collection of arms, in which the expected reward of each arm is a linear function of an r-dimensional random vector Z ∈ ℝ^r, where r ≥ 2. The objective is to minimize the cumulative regret and Bayes risk. When the set of arms corresponds to the unit sphere, we prove that the regret and Bayes risk are of order Θ(r√T), by establishing a lower bound for an arbitrary policy and showing that a matching upper bound is obtained through a policy that alternates between exploration and exploitation phases. The phase-based policy is also shown to be effective if the set of arms satisfies a strong convexity condition. For the case of a general set of arms, we describe a near-optimal policy whose regret and Bayes risk admit upper bounds of the form O(r√T log^{3/2} T).
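As an illustration of the phase-based idea described in the abstract, the Python sketch below simulates a simple exploration/exploitation schedule for a linear bandit whose arm set is the unit sphere. It is not the paper's exact algorithm or phase-length schedule; the function name, the Gaussian noise model, and the chosen phase length are assumptions made for the example.

import numpy as np

# Illustrative sketch (not the paper's exact policy): a phase-based strategy
# for a linear bandit on the unit sphere in R^r. The expected reward of arm u
# is <u, Z> for an unknown vector Z; observed rewards are corrupted by noise.
# Exploration phases pull the r coordinate directions to estimate Z by
# averaging; exploitation phases pull the arm aligned with the estimate.
def phase_based_policy(Z, T, r, noise_std=1.0, exploit_len=None, seed=0):
    rng = np.random.default_rng(seed)
    if exploit_len is None:
        # Assumed phase length for the sketch; the paper tunes phase lengths
        # to achieve the O(r sqrt(T)) regret bound.
        exploit_len = max(1, int(np.sqrt(T)))
    sums = np.zeros(r)        # running reward sums per coordinate direction
    counts = np.zeros(r)      # number of pulls per coordinate direction
    regret = 0.0
    best = np.linalg.norm(Z)  # optimal expected reward on the unit sphere
    t = 0
    while t < T:
        # Exploration phase: play each basis vector e_1, ..., e_r once.
        for i in range(r):
            if t >= T:
                break
            reward = Z[i] + noise_std * rng.standard_normal()
            sums[i] += reward
            counts[i] += 1
            regret += best - Z[i]
            t += 1
        # Exploitation phase: play the greedy arm Z_hat / ||Z_hat||.
        Z_hat = sums / np.maximum(counts, 1)
        norm = np.linalg.norm(Z_hat)
        arm = Z_hat / norm if norm > 0 else np.eye(r)[0]
        for _ in range(exploit_len):
            if t >= T:
                break
            regret += best - arm @ Z
            t += 1
    return regret

# Example run: r = 5, horizon T = 10,000, a randomly drawn parameter vector.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Z = rng.standard_normal(5)
    print("cumulative regret:", phase_based_policy(Z, T=10_000, r=5))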
