Search Results for author: Lu Tang

Found 5 papers, 4 papers with code

Covariate-guided Bayesian mixture model for multivariate time series

1 code implementation · 3 Jan 2023 · Haoyi Fu, Lu Tang, Ori Rosen, Alison E. Hipwell, Theodore J. Huppert, Robert T. Krafty

In this paper, we propose a group-based method to cluster a collection of multivariate time series via a Bayesian mixture of smoothing splines.

Time Series · Time Series Analysis

RISE: Robust Individualized Decision Learning with Sensitive Variables

1 code implementation · 12 Nov 2022 · Xiaoqing Tan, Zhengling Qi, Christopher W. Seymour, Lu Tang

This paper introduces RISE, a robust individualized decision learning framework with sensitive variables: variables that are collectible and informative for the intervention decision, but whose inclusion in decision making is prohibited for reasons such as delayed availability or fairness concerns.

Decision Making · Fairness

A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources

1 code implementation · 10 Mar 2021 · Xiaoqing Tan, Chung-Chou H. Chang, Ling Zhou, Lu Tang

We propose a tree-based model averaging approach to improve the estimation accuracy of conditional average treatment effects (CATE) at a target site by leveraging models derived from other potentially heterogeneous sites, without them sharing subject-level data.

Federated Learning
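The paper's tree-based, covariate-dependent weighting is not reproduced here, but the underlying model-averaging idea can be sketched with a cruder scheme: weight each source site's model by its inverse MSE on a small labeled target sample. Everything below (site models, weights, data) is simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K = 3 source sites each share a fitted CATE model
# (here a linear predictor tau_k(x) = x @ w_k standing in for whatever
# estimator each site trained locally). No subject-level data is shared.
site_coefs = [np.array([1.0, 0.5]), np.array([0.9, 0.6]), np.array([-2.0, 3.0])]
site_models = [lambda X, w=w: X @ w for w in site_coefs]

# Target-site data with a small sample of (noisy) treatment-effect labels.
X_target = rng.normal(size=(200, 2))
tau_target = X_target @ np.array([1.0, 0.55]) + rng.normal(scale=0.1, size=200)

# Model averaging: inverse-MSE weights, normalized to sum to one. The paper
# instead learns weights that vary with covariates via a tree; this global
# weighting is a simplified stand-in.
preds = np.stack([m(X_target) for m in site_models], axis=1)   # shape (n, K)
mse = ((preds - tau_target[:, None]) ** 2).mean(axis=0)        # shape (K,)
weights = (1.0 / mse) / (1.0 / mse).sum()

tau_hat = preds @ weights   # ensemble CATE estimate at the target site
```

The third (badly mismatched) site ends up with near-zero weight, so the ensemble tracks the two sites whose models transfer well.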

A sparse negative binomial mixture model for clustering RNA-seq count data

1 code implementation · 5 Dec 2019 · Tanbin Rahman, Yujia Li, Tianzhou Ma, Lu Tang, George Tseng

Given the prevalence of RNA-seq technology and the lack of count-data models for clustering, the current practice is to normalize count expression data into continuous measures and apply existing models with a Gaussian assumption.

Clustering · Feature Selection +1
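The paper's model is multivariate and sparse; as a minimal one-dimensional sketch, negative binomial mixture clustering can be fit by EM with a fixed, known dispersion r (omitting the sparsity/feature-selection penalty). With r fixed, the M-step mean update is simply the responsibility-weighted sample mean. All data below are simulated.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulated counts from two NB clusters with means 5 and 50, dispersion r = 10.
# numpy's NB(n, p) has mean n*(1-p)/p, so p = r/(r+mu) gives mean mu.
r = 10.0
y = np.concatenate([rng.negative_binomial(r, r / (r + 5.0), 150),
                    rng.negative_binomial(r, r / (r + 50.0), 150)])

def nb_logpmf(y, mu, r):
    """Negative binomial log-pmf in the (mean mu, dispersion r) parameterization."""
    p = r / (r + mu)
    return (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
            + r * np.log(p) + y * np.log1p(-p))

K = 2
mu = np.array([1.0, 100.0])    # deliberately poor initial cluster means
pi = np.full(K, 1.0 / K)
for _ in range(100):
    # E-step: posterior cluster responsibilities (log-space for stability).
    logp = np.log(pi) + nb_logpmf(y[:, None], mu[None, :], r)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: with r fixed, the NB mean MLE is the weighted sample mean.
    nk = resp.sum(axis=0)
    pi = nk / len(y)
    mu = (resp * y[:, None]).sum(axis=0) / nk

labels = resp.argmax(axis=1)   # hard cluster assignments
```

Working directly with counts avoids the normalize-then-Gaussian workaround the abstract describes.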

Method of Contraction-Expansion (MOCE) for Simultaneous Inference in Linear Models

no code implementations · 4 Aug 2019 · Fei Wang, Ling Zhou, Lu Tang, Peter X.-K. Song

To establish simultaneous post-model-selection inference, we propose a method of contraction and expansion (MOCE) along the lines of debiased estimation, which enables us to balance the bias-variance trade-off so that the super-sparsity assumption may be relaxed.

Model Selection
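MOCE itself is not reproduced here, but the debiasing idea it builds on can be sketched in a low-dimensional setting (n > p): fit a lasso, then add a one-step correction using an inverse of the sample covariance to obtain an estimator with valid normal-based confidence intervals. In high dimensions this inverse would come from, e.g., node-wise lasso; here the exact inverse exists, so the correction recovers OLS. All data and tuning choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, s = 200, 10, 3
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:s] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def lasso_cd(X, y, lam, sweeps=500):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual
            z = X[:, j] @ r_j / n
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

lam = 0.1
b_lasso = lasso_cd(X, y, lam)

# Debiasing: one-step correction with M ~ inverse covariance. Since n > p,
# the sample inverse is exact and b_debiased coincides with OLS.
Sigma = X.T @ X / n
M = np.linalg.inv(Sigma)
b_debiased = b_lasso + M @ X.T @ (y - X @ b_lasso) / n

# Plug-in standard errors for normal-based CIs: sqrt([M Sigma M^T]_jj / n),
# scaled by a residual noise estimate (true sparsity s used for the df here).
resid = y - X @ b_lasso
sigma_hat = np.sqrt(resid @ resid / (n - s))
se = sigma_hat * np.sqrt(np.diag(M @ Sigma @ M.T) / n)
```

A 95% interval for coefficient j is then `b_debiased[j] ± 1.96 * se[j]`; the debiasing step removes the shrinkage bias that makes raw lasso estimates unusable for such intervals.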
