Search Results for author: Predrag Klasnja

Found 12 papers, 2 papers with code

AI-Assisted Causal Pathway Diagram for Human-Centered Design

1 code implementation • 12 Mar 2024 • Ruican Zhong, Donghoon Shin, Rosemary Meza, Predrag Klasnja, Lucas Colusso, Gary Hsieh

This paper explores the integration of causal pathway diagrams (CPD) into human-centered design (HCD), investigating how these diagrams can enhance the early stages of the design process.

Effect-Invariant Mechanisms for Policy Generalization

no code implementations • 19 Jun 2023 • Sorawit Saengkyongam, Niklas Pfister, Predrag Klasnja, Susan Murphy, Jonas Peters

A major challenge in policy learning is how to adapt efficiently to unseen environments or tasks.

Assessing the Impact of Context Inference Error and Partial Observability on RL Methods for Just-In-Time Adaptive Interventions

no code implementations • 17 May 2023 • Karine Karine, Predrag Klasnja, Susan A. Murphy, Benjamin M. Marlin

Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized health interventions developed within the behavioral science community.

Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling

1 code implementation • 11 Apr 2023 • Susobhan Ghosh, Raphael Kim, Prasidh Chhabria, Raaz Dwivedi, Predrag Klasnja, Peng Liao, Kelly Zhang, Susan Murphy

We use a working definition of personalization and introduce a resampling-based methodology for investigating whether the personalization exhibited by the RL algorithm is an artifact of the algorithm's stochasticity.

Decision Making • Reinforcement Learning (RL)
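The resampling idea in the abstract above can be illustrated with a minimal sketch: redraw action sequences from the algorithm's own logged per-decision action probabilities to see how much apparent between-user variation its intrinsic stochasticity alone produces. The probabilities and the redraw-and-compare logic below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_actions(probs, n_draws=1000, rng=rng):
    """Redraw binary action sequences from the algorithm's logged
    action probabilities, capturing its intrinsic stochasticity."""
    # probs: (n_users, n_times) probabilities of sending treatment
    return rng.binomial(1, probs, size=(n_draws, *probs.shape))

# Toy logged probabilities for 3 users over 5 decision times (made up).
probs = np.array([[0.2, 0.3, 0.2, 0.4, 0.3],
                  [0.7, 0.6, 0.8, 0.7, 0.6],
                  [0.5, 0.5, 0.5, 0.5, 0.5]])

draws = resample_actions(probs)
# For each resampled run, how much do users' average treatment rates
# differ? Comparing this spread under the null (identical probabilities)
# vs. the logged ones indicates whether personalization is real.
per_user_rates = draws.mean(axis=2)           # (n_draws, n_users)
between_user_sd = per_user_rates.std(axis=1)  # spread across users
print(between_user_sd.mean())
```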

Doubly robust nearest neighbors in factor models

no code implementations • 25 Nov 2022 • Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah

We consider a matrix completion problem with missing data, where the $(i, t)$-th entry, when observed, is given by its mean $f(u_i, v_t)$ plus mean-zero noise for an unknown function $f$ and latent factors $u_i$ and $v_t$.

Counterfactual Inference +1
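The factor model above can be illustrated with a rough doubly robust nearest-neighbor sketch: estimate a missing entry by combining a row neighbor j and a column neighbor s through the cross term A[i, s] + A[j, t] - A[j, s], which is exact for additive factor models. The toy data and the given neighbor sets below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def dr_nn_estimate(A, mask, i, t, row_nbrs, col_nbrs):
    """Doubly robust nearest-neighbor sketch for the (i, t) mean:
    average A[i, s] + A[j, t] - A[j, s] over observed neighbor pairs.
    Neighbor selection is assumed to be done elsewhere."""
    vals = [A[i, s] + A[j, t] - A[j, s]
            for j in row_nbrs for s in col_nbrs
            if mask[i, s] and mask[j, t] and mask[j, s]]
    return np.mean(vals) if vals else np.nan

# Toy additive factor model f(u_i, v_t) = u_i + v_t, entry (0, 0) missing.
u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 1.0, 4.0])
A = u[:, None] + v[None, :]
mask = np.ones_like(A, dtype=bool)
mask[0, 0] = False  # treat (0, 0) as unobserved

est = dr_nn_estimate(A, mask, 0, 0, row_nbrs=[1], col_nbrs=[1])
print(est)  # recovers u[0] + v[0] = 3.0 exactly in the additive case
```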

Counterfactual inference for sequential experiments

no code implementations • 14 Feb 2022 • Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah

Our goal is to provide inference guarantees for the counterfactual mean at the smallest possible scale -- mean outcome under different treatments for each unit and each time -- with minimal assumptions on the adaptive treatment policy.

Counterfactual Inference +3

IntelligentPooling: Practical Thompson Sampling for mHealth

no code implementations • 31 Jul 2020 • Sabina Tomkins, Peng Liao, Predrag Klasnja, Susan Murphy

In this work we are concerned with the following challenges: 1) individuals in the same context can exhibit differential responses to treatment; 2) only a limited amount of data is available for learning on any one individual; and 3) responses to treatment are non-stationary.

Reinforcement Learning (RL) +1
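The pooling idea above — shrinking sparse per-user data toward the population — can be sketched with a much-simplified Beta-Bernoulli Thompson sampler whose posterior pools success/failure counts across all users. The class name, two-arm setup, and pooling-everything shortcut are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

class PooledThompson:
    """Bernoulli Thompson sampling where the Beta posterior pools
    counts across all users, so any one user's sparse data is shrunk
    toward the population (a toy stand-in for the paper's pooling)."""

    def __init__(self, n_users, n_arms, prior_a=1.0, prior_b=1.0):
        self.succ = np.zeros((n_users, n_arms))
        self.fail = np.zeros((n_users, n_arms))
        self.prior_a, self.prior_b = prior_a, prior_b

    def select(self, user):
        samples = []
        for arm in range(self.succ.shape[1]):
            # Pool all users' counts into the prior; a fuller treatment
            # would weight the selecting user's own counts more heavily.
            a = self.prior_a + self.succ[:, arm].sum()
            b = self.prior_b + self.fail[:, arm].sum()
            samples.append(rng.beta(a, b))
        return int(np.argmax(samples))

    def update(self, user, arm, reward):
        if reward:
            self.succ[user, arm] += 1
        else:
            self.fail[user, arm] += 1

ts = PooledThompson(n_users=3, n_arms=2)
for _ in range(20):
    ts.update(user=0, arm=1, reward=1)  # user 0 responds well to arm 1
print(ts.select(user=2))  # user 2 benefits from the pooled evidence
```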

Batch Policy Learning in Average Reward Markov Decision Processes

no code implementations • 23 Jul 2020 • Peng Liao, Zhengling Qi, Runzhe Wan, Predrag Klasnja, Susan Murphy

The performance of the method is illustrated by simulation studies and an analysis of a mobile health study promoting physical activity.

Rapidly Personalizing Mobile Health Treatment Policies with Limited Data

no code implementations • 23 Feb 2020 • Sabina Tomkins, Peng Liao, Predrag Klasnja, Serena Yeung, Susan Murphy

In mobile health (mHealth), reinforcement learning algorithms that adapt to one's context without learning personalized policies might fail to distinguish between the needs of individuals.

Reinforcement Learning (RL)

Off-Policy Estimation of Long-Term Average Outcomes with Applications to Mobile Health

no code implementations • 30 Dec 2019 • Peng Liao, Predrag Klasnja, Susan Murphy

The mHealth intervention policies, often called just-in-time adaptive interventions, are decision rules that map an individual's current state (e.g., the individual's past behaviors as well as current observations of time, location, social activity, stress, and urges to smoke) to a particular treatment at each of many time points.
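A decision rule of this kind can be sketched as a plain function from the current state to a treatment option. The State fields, thresholds, and option names below are made up for illustration; in the work above such rules are learned and evaluated from data, not hand-coded.

```python
from dataclasses import dataclass

@dataclass
class State:
    """A toy snapshot of current context (fields are illustrative)."""
    recent_steps: int
    is_sedentary: bool
    stress_level: float  # 0..1
    hour: int

def decision_rule(s: State) -> str:
    """A hand-written just-in-time decision rule mapping state to a
    treatment option. All thresholds here are hypothetical."""
    if not (8 <= s.hour <= 21):
        return "no_message"  # respect overnight hours
    if s.is_sedentary and s.recent_steps < 500:
        return "activity_suggestion"
    if s.stress_level > 0.7:
        return "stress_management_prompt"
    return "no_message"

print(decision_rule(State(recent_steps=120, is_sedentary=True,
                          stress_level=0.2, hour=14)))
# → activity_suggestion
```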

Personalized HeartSteps: A Reinforcement Learning Algorithm for Optimizing Physical Activity

no code implementations • 8 Sep 2019 • Peng Liao, Kristjan Greenewald, Predrag Klasnja, Susan Murphy

In this paper, we develop a Reinforcement Learning (RL) algorithm that continuously learns and improves the treatment policy embedded in the JITAI as the data is being collected from the user.

Reinforcement Learning (RL)

Personalizing Intervention Probabilities By Pooling

no code implementations • 2 Dec 2018 • Sabina Tomkins, Predrag Klasnja, Susan Murphy

In many mobile health interventions, treatments should only be delivered in a particular context, for example when a user is currently stressed, walking or sedentary.
