Search Results for author: Prasad Patil

Found 4 papers, 3 papers with code

Multi-Study R-Learner for Estimating Heterogeneous Treatment Effects Across Studies Using Statistical Machine Learning

1 code implementation • 1 Jun 2023 • Cathy Shyr, Boyu Ren, Prasad Patil, Giovanni Parmigiani

We propose a framework for multi-study HTE estimation that accounts for between-study heterogeneity in the nuisance functions and treatment effects.
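
The multi-study R-learner is specified in full in the paper; as a rough sketch of the idea, the snippet below fits the nuisance functions $m(x)=\mathbb{E}[Y\mid X]$ and $e(x)=\mathbb{E}[W\mid X]$ separately within each study, then runs a single R-learner-style weighted regression for the treatment effect. The function name, the random-forest nuisance models, and the linear effect model are all illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch, not the paper's implementation: study-specific
# nuisance fits followed by one R-learner-style effect regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.linear_model import LinearRegression

def multi_study_r_learner(X, y, w, study):
    """X: (n, p) features; y: outcomes; w: 0/1 treatment; study: study labels."""
    m_hat = np.zeros_like(y, dtype=float)   # E[Y | X], fit within each study
    e_hat = np.zeros_like(y, dtype=float)   # E[W | X], fit within each study
    for s in np.unique(study):
        idx = study == s
        # In-sample predictions for brevity; a real fit would cross-fit.
        m_hat[idx] = RandomForestRegressor().fit(X[idx], y[idx]).predict(X[idx])
        e_hat[idx] = RandomForestClassifier().fit(X[idx], w[idx]).predict_proba(X[idx])[:, 1]
    e_hat = np.clip(e_hat, 0.01, 0.99)      # keep treatment residuals away from 0
    y_res, w_res = y - m_hat, w - e_hat
    # R-learner step: regress the pseudo-outcome on X, weighting by w_res**2.
    return LinearRegression().fit(X, y_res / w_res, sample_weight=w_res ** 2)
```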

Multi-Study Boosting: Theoretical Considerations for Merging vs. Ensembling

1 code implementation • 11 Jul 2022 • Cathy Shyr, Pragya Sur, Giovanni Parmigiani, Prasad Patil

In the regression setting, we provide theoretical guidelines based on an analytical transition point to determine whether it is more beneficial to merge or to ensemble for boosting with linear learners.
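
The analytical transition point itself is derived in the paper; the sketch below only sets up the two strategies it compares, using a hand-rolled componentwise $L_2$-boosting routine with linear base learners. The helper names, step size, and the equal-weight ensemble are illustrative assumptions.

```python
# A minimal sketch assuming componentwise L2 boosting with linear base
# learners; helper names and the equal-weight ensemble are illustrative.
import numpy as np

def l2_boost_linear(X, y, steps=200, nu=0.1):
    """Each step fits one covariate to the residuals by least squares."""
    beta, intercept = np.zeros(X.shape[1]), y.mean()
    resid = y - intercept
    norms = (X ** 2).sum(axis=0) + 1e-12
    for _ in range(steps):
        b = X.T @ resid / norms            # univariate least-squares fits
        j = np.argmax(b ** 2 * norms)      # covariate with largest SS reduction
        beta[j] += nu * b[j]               # shrunken coordinate update
        resid -= nu * b[j] * X[:, j]
    return intercept, beta

def merged(studies):
    """Pool every study's rows, then boost once on the merged data."""
    X = np.vstack([Xs for Xs, _ in studies])
    y = np.concatenate([ys for _, ys in studies])
    return l2_boost_linear(X, y)

def ensembled(studies, X_new):
    """Boost within each study, then average the study-specific predictions."""
    fits = [l2_boost_linear(Xs, ys) for Xs, ys in studies]
    return np.mean([a + X_new @ b for a, b in fits], axis=0)
```

Which of `merged` or `ensembled` wins depends on how heterogeneous the studies are; the paper's transition point makes that trade-off precise for boosting with linear learners.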

Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations

no code implementations • 20 Jun 2020 • Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni Parmigiani, Prasad Patil, Pragya Sur

We study an adversarial loss function for $k$ domains and precisely characterize its limiting behavior as $k$ grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps.

Domain Generalization • Fairness
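
The paper's exact loss is not reproduced here; one common form of an adversarial invariant-representation objective over $k$ domains, given as an assumption rather than the paper's definition, is

$$\min_{\phi,\,h}\;\max_{d}\;\frac{1}{k}\sum_{i=1}^{k}\mathbb{E}_{(x,y)\sim D_i}\big[\ell\big(h(\phi(x)),\,y\big)\big]\;+\;\lambda\,\frac{1}{k}\sum_{i=1}^{k}\mathbb{E}_{x\sim D_i}\big[\log d\big(\phi(x)\big)_i\big]$$

where $\phi$ is the shared representation, $h$ the label predictor, and $d$ a discriminator outputting a distribution over the $k$ source domains. The inner maximum rewards $d$ for identifying each sample's domain, so the outer minimum pushes $\phi$ toward domain-invariant representations; the limiting behavior of this kind of objective as $k$ grows is what the paper characterizes.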

Merging versus Ensembling in Multi-Study Prediction: Theoretical Insight from Random Effects

1 code implementation • 17 May 2019 • Zoe Guan, Giovanni Parmigiani, Prasad Patil

A critical decision point when training predictors using multiple studies is whether these studies should be combined or treated separately.

BIG-bench Machine Learning
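
For intuition about the merge-versus-ensemble decision under random effects, the simulation sketch below contrasts one least-squares fit on the pooled data with an equal-weight average of study-specific fits. All constants, the equal weighting, and the test distribution are illustrative choices, not the paper's; in this setup, raising the between-study variance `tau2` tends to shift the advantage from merging to ensembling.

```python
# Illustrative simulation of the random-effects setting; all constants
# and the equal-weight ensemble are assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 5, 100, 10
sigma2, tau2 = 1.0, 0.5                 # within- and between-study variances
beta = rng.normal(size=p)               # shared mean coefficients

def simulate_study():
    X = rng.normal(size=(n, p))
    b_s = beta + rng.normal(scale=np.sqrt(tau2), size=p)   # random effect
    return X, X @ b_s + rng.normal(scale=np.sqrt(sigma2), size=n)

studies = [simulate_study() for _ in range(k)]
X_test = rng.normal(size=(1000, p))
y_test = X_test @ beta                  # target: prediction at the mean effect

# Merged: a single least-squares fit on the pooled rows.
X_all = np.vstack([X for X, _ in studies])
y_all = np.concatenate([y for _, y in studies])
b_merged = np.linalg.lstsq(X_all, y_all, rcond=None)[0]

# Ensembled: equal-weight average of per-study least-squares predictions.
pred_ens = np.mean(
    [X_test @ np.linalg.lstsq(X, y, rcond=None)[0] for X, y in studies], axis=0
)

mse = lambda pred: np.mean((pred - y_test) ** 2)
print("merged MSE:", mse(X_test @ b_merged), "ensembled MSE:", mse(pred_ens))
```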
