A clustering-based reinforcement learning approach for tailored personalization of e-Health interventions

Personalization can substantially improve the effectiveness of health interventions. Reinforcement learning (RL) algorithms are well suited to learning such tailored interventions from sequential data collected about individuals. However, learning can be fragile: the time available to learn intervention policies is limited, since users may disengage quickly, and in e-Health the timing of an intervention is crucial, as the optimal window can pass. We present an approach that learns tailored personalization policies for groups of users by combining RL with clustering. The benefits are two-fold: learning is sped up to prevent disengagement, while a high level of personalization is maintained. Our clustering approach uses dynamic time warping to compare user trajectories consisting of states and rewards. We apply both online and batch RL to learn policies over clusters of individuals, and introduce our self-developed, publicly available simulator for e-Health interventions to evaluate the approach. We compare our methods against an e-Health intervention benchmark and demonstrate that batch learning outperforms online learning in our setting. Furthermore, our proposed clustering approach for RL finds near-optimal clusterings, which lead to significantly better policies in terms of cumulative reward than learning a policy per individual or learning one non-personalized policy across all individuals. Our findings also indicate that the learned policies send interventions at the right moments, and that users work out more often and at the right times of day.
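The clustering step hinges on comparing user trajectories with dynamic time warping (DTW), which aligns sequences that share a shape but are shifted or stretched in time. Below is a minimal, illustrative sketch of a DTW distance over 1-D reward trajectories; the paper's actual method compares full state-and-reward trajectories, and the variable names and toy data here are hypothetical.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences,
    computed with the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # repeat b[j-1]
                                  dp[i][j - 1],      # repeat a[i-1]
                                  dp[i - 1][j - 1])  # advance both
    return dp[n][m]

# Hypothetical daily reward trajectories for three users: a
# time-shifted copy of a shape stays close under DTW even though
# it differs pointwise, while an inverted shape is far away.
u1 = [0, 1, 2, 3, 2, 1, 0]
u2 = [0, 0, 1, 2, 3, 2, 1]   # same shape as u1, shifted one step
u3 = [3, 2, 1, 0, 1, 2, 3]   # inverted shape

print(dtw_distance(u1, u2))  # small: shapes align after warping
print(dtw_distance(u1, u3))  # large: shapes cannot be aligned cheaply
```

A pairwise DTW distance matrix computed this way can then be fed into any standard clustering algorithm (e.g. k-medoids or hierarchical clustering) to group users before learning one RL policy per cluster.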
