1 code implementation • 12 Mar 2024 • Ruican Zhong, Donghoon Shin, Rosemary Meza, Predrag Klasnja, Lucas Colusso, Gary Hsieh
This paper explores the integration of causal pathway diagrams (CPD) into human-centered design (HCD), investigating how these diagrams can enhance the early stages of the design process.
no code implementations • 19 Jun 2023 • Sorawit Saengkyongam, Niklas Pfister, Predrag Klasnja, Susan Murphy, Jonas Peters
A major challenge in policy learning is how to adapt efficiently to unseen environments or tasks.
no code implementations • 17 May 2023 • Karine Karine, Predrag Klasnja, Susan A. Murphy, Benjamin M. Marlin
Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized health interventions developed within the behavioral science community.
1 code implementation • 11 Apr 2023 • Susobhan Ghosh, Raphael Kim, Prasidh Chhabria, Raaz Dwivedi, Predrag Klasnja, Peng Liao, Kelly Zhang, Susan Murphy
We use a working definition of personalization and introduce a resampling-based methodology for investigating whether the personalization exhibited by the RL algorithm is an artifact of the RL algorithm stochasticity.
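The core idea — rerunning the algorithm under resampled internal randomness to see whether apparent personalization survives — can be illustrated with a minimal sketch. The `run_rl_policy` stand-in, the user data, and the spread comparison are all hypothetical; they are not the paper's methodology, only an analogy for it.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_rl_policy(user_data, seed):
    # Hypothetical stand-in for an RL run: returns a per-user
    # "policy score" that depends on the data plus seed noise.
    r = np.random.default_rng(seed)
    return user_data.mean() + 0.1 * r.standard_normal()

# Three synthetic users with genuinely different response levels.
users = [rng.normal(loc=m, size=50) for m in (0.2, 0.5, 0.8)]

# Resample the algorithm's internal randomness: rerun each user
# many times with different seeds and inspect the spread.
scores = np.array([[run_rl_policy(u, s) for s in range(200)] for u in users])

between_user_spread = scores.mean(axis=1).std()  # real differences
within_user_spread = scores.std(axis=1).mean()   # seed-only noise

# If between-user spread dominates within-user (seed) spread, the
# observed personalization is unlikely to be a stochastic artifact.
print(between_user_spread > within_user_spread)
```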
no code implementations • 25 Nov 2022 • Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah
We consider a matrix completion problem with missing data, where the $(i, t)$-th entry, when observed, is given by its mean $f(u_i, v_t)$ plus mean-zero noise for an unknown function $f$ and latent factors $u_i$ and $v_t$.
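The observation model can be written out concretely. The sketch below assumes a hypothetical bilinear $f(u, v) = u \cdot v$ and arbitrary small dimensions purely for illustration; the paper leaves $f$ unknown and nonparametric.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_times, rank = 20, 30, 1  # small illustrative sizes

# Latent factors u_i and v_t; a hypothetical bilinear f(u, v) = u . v
U = rng.normal(size=(n_units, rank))
V = rng.normal(size=(n_times, rank))
mean = U @ V.T                      # f(u_i, v_t) for every (i, t)

noise = 0.1 * rng.standard_normal(mean.shape)   # mean-zero noise
observed_mask = rng.random(mean.shape) < 0.7    # ~30% entries missing
Y = np.where(observed_mask, mean + noise, np.nan)

# The (i, t)-th entry, when observed, is f(u_i, v_t) plus noise;
# unobserved entries are NaN and must be inferred from the rest.
print(np.isnan(Y).sum(), "missing entries out of", Y.size)
```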
no code implementations • 14 Feb 2022 • Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah
Our goal is to provide inference guarantees for the counterfactual mean at the smallest possible scale -- mean outcome under different treatments for each unit and each time -- with minimal assumptions on the adaptive treatment policy.
no code implementations • 31 Jul 2020 • Sabina Tomkins, Peng Liao, Predrag Klasnja, Susan Murphy
In this work we are concerned with the following challenges: 1) individuals who are in the same context can exhibit differential responses to treatment; 2) only a limited amount of data is available for learning about any one individual; and 3) responses to treatment are non-stationary.
no code implementations • 23 Jul 2020 • Peng Liao, Zhengling Qi, Runzhe Wan, Predrag Klasnja, Susan Murphy
The performance of the method is illustrated by simulation studies and an analysis of a mobile health study promoting physical activity.
no code implementations • 23 Feb 2020 • Sabina Tomkins, Peng Liao, Predrag Klasnja, Serena Yeung, Susan Murphy
In mobile health (mHealth), reinforcement learning algorithms that adapt to one's context without learning personalized policies might fail to distinguish between the needs of individuals.
no code implementations • 30 Dec 2019 • Peng Liao, Predrag Klasnja, Susan Murphy
The mHealth intervention policies, often called just-in-time adaptive interventions, are decision rules that map an individual's current state (e.g., the individual's past behaviors as well as current observations of time, location, social activity, stress, and urges to smoke) to a particular treatment at each of many time points.
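Such a decision rule can be made concrete with a toy example. The state fields, thresholds, and treatment names below are invented for illustration; they are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class State:
    recent_steps: int      # past behavior
    is_driving: bool       # current context
    stress_level: float    # 0..1, hypothetical stress score

def decision_rule(state: State) -> str:
    # Map the current state to one treatment option at this time point.
    if state.is_driving:
        return "no_message"               # user unavailable for treatment
    if state.stress_level > 0.7:
        return "stress_management_prompt"
    if state.recent_steps < 2000:
        return "activity_suggestion"
    return "no_message"

print(decision_rule(State(recent_steps=1500, is_driving=False, stress_level=0.2)))
# -> activity_suggestion
```

In a deployed JITAI this rule would be evaluated at every decision point as fresh sensor observations arrive.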
no code implementations • 8 Sep 2019 • Peng Liao, Kristjan Greenewald, Predrag Klasnja, Susan Murphy
In this paper, we develop a Reinforcement Learning (RL) algorithm that continuously learns and improves the treatment policy embedded in the JITAI as the data is being collected from the user.
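The flavor of continual policy improvement from streaming user data can be sketched with an online Bayesian linear-bandit update. This is a generic Thompson-sampling-style recursion under an assumed linear reward model, not the paper's algorithm.

```python
import numpy as np

d = 3
A = np.eye(d)            # posterior precision (identity prior)
b = np.zeros(d)          # accumulated reward-weighted features

def update(x, r):
    # Incorporate one (context, reward) observation as it arrives.
    global A, b
    A += np.outer(x, x)
    b += r * x

def sample_weights(rng):
    # Thompson sampling: draw a plausible reward model from the posterior.
    cov = np.linalg.inv(A)
    return rng.multivariate_normal(cov @ b, cov)

rng = np.random.default_rng(2)
true_w = np.array([1.0, -0.5, 0.2])   # hypothetical true effect vector
for _ in range(100):
    x = rng.standard_normal(d)
    r = x @ true_w + 0.1 * rng.standard_normal()
    update(x, r)

# The posterior mean converges toward the true weights as data accrues.
print(np.linalg.inv(A) @ b)
```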
no code implementations • 2 Dec 2018 • Sabina Tomkins, Predrag Klasnja, Susan Murphy
In many mobile health interventions, treatments should only be delivered in a particular context, for example when a user is currently stressed, walking, or sedentary.