# Learning Contextual Bandits in a Non-stationary Environment

23 May 2018

Multi-armed bandit algorithms have become a reference solution for handling the explore/exploit dilemma in recommender systems, as well as in many other important real-world problems such as display advertising. However, such algorithms usually assume a stationary reward distribution, which rarely holds in practice, as users' preferences are dynamic...
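To make the stationarity issue concrete, here is a minimal sketch (not the paper's method) of one standard way to handle drifting rewards: an epsilon-greedy bandit whose per-arm statistics are exponentially discounted, so old evidence fades and the policy can track a reward distribution that changes mid-run. All names and parameter values below are illustrative assumptions.

```python
import random


class DiscountedEpsilonGreedy:
    """Epsilon-greedy bandit with exponentially discounted statistics,
    a common simple baseline for non-stationary reward distributions.
    (Illustrative sketch; not the algorithm proposed in the paper.)"""

    def __init__(self, n_arms, epsilon=0.1, gamma=0.95):
        self.n_arms = n_arms
        self.epsilon = epsilon          # exploration probability
        self.gamma = gamma              # discount applied to past evidence
        self.counts = [0.0] * n_arms    # discounted pull counts
        self.rewards = [0.0] * n_arms   # discounted reward sums

    def select(self, rng):
        """Explore uniformly with prob. epsilon, else pick the arm with
        the highest discounted empirical mean (unpulled arms first)."""
        if rng.random() < self.epsilon:
            return rng.randrange(self.n_arms)
        est = [r / c if c > 0 else float("inf")
               for r, c in zip(self.rewards, self.counts)]
        return max(range(self.n_arms), key=lambda a: est[a])

    def update(self, arm, reward):
        # Decay every arm's statistics so stale observations lose weight.
        self.counts = [c * self.gamma for c in self.counts]
        self.rewards = [r * self.gamma for r in self.rewards]
        self.counts[arm] += 1.0
        self.rewards[arm] += reward


# Simulate an abrupt preference change: arm 0 is best for the first
# 2000 rounds, then arm 1 becomes best.
rng = random.Random(0)
bandit = DiscountedEpsilonGreedy(n_arms=2, epsilon=0.1, gamma=0.95)
for t in range(4000):
    means = (0.8, 0.2) if t < 2000 else (0.2, 0.8)
    arm = bandit.select(rng)
    reward = 1.0 if rng.random() < means[arm] else 0.0
    bandit.update(arm, reward)

# After the change point, the discounted estimates recover and the
# bandit's greedy choice shifts to the new best arm.
best = max(range(2),
           key=lambda a: bandit.rewards[a] / bandit.counts[a])
```

A stationary bandit (gamma = 1) would average over the entire history and remain anchored to the pre-change arm for a long time; the discount bounds the effective memory to roughly 1/(1 - gamma) recent pulls.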