Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits

Nov 2018

We propose a multi-armed bandit algorithm that explores by randomizing its history. The key idea is to estimate the value of an arm from a bootstrap sample of its history, to which we add pseudo observations after each pull of the arm. The pseudo observations may seem harmful. On the contrary, they guarantee that the bootstrap sample is optimistic with high probability. Because of this, we call our algorithm Giro, an abbreviation for garbage in, reward out. We analyze Giro in a $K$-armed Bernoulli bandit and prove an $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $\Delta$ denotes the gap between the expected rewards of the optimal arm and the best suboptimal arm. The main advantage of our exploration strategy is that it can be combined with any reward generalization model, such as a neural network. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that Giro is comparable to or better than state-of-the-art algorithms.
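The idea in the abstract lends itself to a short sketch: resample each arm's history (real rewards plus pseudo observations) with replacement, score arms by the bootstrap mean, and pull the best. Below is a minimal, illustrative Python sketch for a Bernoulli bandit; the function name `giro`, the `pull` callback, and the pseudo-observation count `a` are assumptions for illustration, not the paper's exact pseudocode.

```python
import numpy as np

def giro(pull, K, n, a=1, rng=None):
    """Illustrative sketch of bootstrap exploration with pseudo observations.

    pull(i) -> reward in {0, 1} for arm i. After each pull of an arm,
    `a` pseudo rewards of 1 and `a` pseudo rewards of 0 are appended to
    that arm's history (an assumed parameterization).
    """
    rng = np.random.default_rng() if rng is None else rng
    history = [[] for _ in range(K)]  # per-arm histories, real + pseudo rewards
    rewards = np.zeros(n)
    for t in range(n):
        values = np.empty(K)
        for i in range(K):
            h = history[i]
            if not h:
                values[i] = np.inf  # force an initial pull of each arm
            else:
                # Bootstrap: resample the history with replacement, take the mean.
                values[i] = rng.choice(h, size=len(h), replace=True).mean()
        arm = int(values.argmax())
        r = pull(arm)
        rewards[t] = r
        # Real observation plus `a` pseudo rewards of each value.
        history[arm].extend([r] + [1] * a + [0] * a)
    return rewards
```

A hypothetical usage on three simulated Bernoulli arms:

```python
means = [0.7, 0.5, 0.4]
rng = np.random.default_rng(0)
rewards = giro(lambda i: int(rng.random() < means[i]), K=3, n=10000, rng=rng)
print(rewards.mean())  # approaches 0.7 as suboptimal pulls become rare
```

The pseudo rewards of 1 are what make the bootstrap mean optimistic with high probability; the matching pseudo rewards of 0 keep the estimate from being biased upward indefinitely.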
