Search Results for author: Abhineet Agarwal

Found 6 papers, 5 papers with code

ED-Copilot: Reduce Emergency Department Wait Time with Language Model Diagnostic Assistance

no code implementations · 21 Feb 2024 · Liwen Sun, Abhineet Agarwal, Aaron Kornblith, Bin Yu, Chenyan Xiong

Using publicly available patient data, we collaborate with ED clinicians to curate MIMIC-ED-Assist, a benchmark that measures the ability of AI systems to suggest laboratory tests that minimize ED wait times while correctly predicting critical outcomes such as death.

Language Modelling

MDI+: A Flexible Random Forest-Based Feature Importance Framework

2 code implementations · 4 Jul 2023 · Abhineet Agarwal, Ana M. Kenney, Yan Shuo Tan, Tiffany M. Tang, Bin Yu

We show that the MDI for a feature $X_k$ in each tree in an RF is equivalent to the unnormalized $R^2$ value in a linear regression of the response on the collection of decision stumps that split on $X_k$.

Drug Response Prediction · Feature Importance · +1
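The abstract's identity can be checked numerically on a toy example: for a single decision stump splitting on $X_k$, the stump's (unnormalized) impurity decrease equals the explained sum of squares from an OLS regression of the response on the stump's indicator. The data, threshold, and sample size below are made-up numbers for illustration only, not from the paper.

```python
# Toy numerical check of the MDI / unnormalized R^2 identity for one stump.
# All values (x, y, threshold) are hypothetical illustration data.

def sse(ys):
    """Sum of squared deviations from the mean."""
    m = sum(ys) / len(ys)
    return sum((v - m) ** 2 for v in ys)

x = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5]   # hypothetical feature X_k
y = [1.0, 1.2, 0.9, 2.1, 2.0, 2.3]   # hypothetical response
threshold = 0.7                       # hypothetical stump split point

left  = [yi for xi, yi in zip(x, y) if xi <= threshold]
right = [yi for xi, yi in zip(x, y) if xi > threshold]

# Total impurity decrease of the split (the stump's unnormalized MDI share).
impurity_decrease = sse(y) - (sse(left) + sse(right))

# OLS of y on [1, stump indicator]: closed-form slope and intercept.
s = [1.0 if xi > threshold else 0.0 for xi in x]
n = len(y)
sbar, ybar = sum(s) / n, sum(y) / n
beta = (sum((si - sbar) * (yi - ybar) for si, yi in zip(s, y))
        / sum((si - sbar) ** 2 for si in s))
alpha = ybar - beta * sbar
fitted = [alpha + beta * si for si in s]

# Explained sum of squares (unnormalized R^2) of that regression.
explained_ss = sum((f - ybar) ** 2 for f in fitted)
assert abs(explained_ss - impurity_decrease) < 1e-9
```

The regression on a binary indicator fits the two group means, so its explained sum of squares collapses to exactly the stump's impurity decrease, which is the one-stump case of the equivalence stated above.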

Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions

1 code implementation · NeurIPS 2023 · Abhineet Agarwal, Anish Agarwal, Suhas Vijaykumar

Our goal is to learn unit-specific potential outcomes for any combination of these $p$ interventions, i.e., $N \times 2^p$ causal parameters.

Causal Inference · Experimental Design
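The $N \times 2^p$ count follows because each unit has one potential outcome per subset of the $p$ interventions. A quick sketch, with N and p as arbitrary small illustration values (not numbers from the paper):

```python
# Counting the causal parameters in the combinatorial-interventions setup:
# each of N units has a potential outcome for every subset of p interventions.
# N and p below are arbitrary illustrative values.
from itertools import combinations

N, p = 100, 5

# Enumerate every subset of the p interventions (including the empty set).
subsets = [c for r in range(p + 1) for c in combinations(range(p), r)]
assert len(subsets) == 2 ** p      # 32 intervention combinations for p = 5

num_params = N * len(subsets)      # unit-specific potential outcomes: N * 2^p
```

Even for modest p this count explodes (here 100 × 32 = 3200 parameters), which is why the paper needs structural assumptions rather than observing each combination per unit.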

Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods

2 code implementations · 2 Feb 2022 · Abhineet Agarwal, Yan Shuo Tan, Omer Ronen, Chandan Singh, Bin Yu

Tree-based models such as decision trees and random forests (RF) are a cornerstone of modern machine-learning practice.
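A minimal sketch of the hierarchical-shrinkage idea behind this paper: a leaf prediction is rewritten as a telescoping sum of node averages along the root-to-leaf path, and each increment is shrunk toward its parent in proportion to a regularization parameter over the parent's sample count. The path (node means and counts) and the λ value below are made-up numbers, and this is a hand-rolled illustration rather than the authors' released implementation (which is in the `imodels` package).

```python
# Sketch of hierarchical shrinkage on a single root-to-leaf path.
# path entries are (node_mean, node_sample_count); all values are hypothetical.

def hs_predict(path, lam):
    """Shrunk prediction for the leaf at the end of `path` (root first)."""
    pred = path[0][0]  # start from the root average
    for (parent_mean, parent_n), (child_mean, _) in zip(path, path[1:]):
        # Shrink each step's increment toward the parent; small parent
        # sample counts and large lam mean heavier shrinkage.
        pred += (child_mean - parent_mean) / (1.0 + lam / parent_n)
    return pred

path = [(1.0, 100), (1.5, 40), (2.5, 10)]  # root -> internal node -> leaf

unshrunk = hs_predict(path, lam=0.0)   # lam = 0 recovers the raw leaf mean
shrunk = hs_predict(path, lam=50.0)    # regularized prediction
assert abs(unshrunk - 2.5) < 1e-12
assert shrunk < unshrunk
```

Because shrinkage only rewrites the predictions of an already-fitted tree, it adds no training cost and can be applied post hoc to each tree in a random forest.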

Fast Interpretable Greedy-Tree Sums

2 code implementations · 28 Jan 2022 · Yan Shuo Tan, Chandan Singh, Keyan Nasseri, Abhineet Agarwal, James Duncan, Omer Ronen, Matthew Epland, Aaron Kornblith, Bin Yu

In such settings, practitioners often use highly interpretable decision tree models, but these suffer from an inductive bias against additive structure.

Additive models · Decision Making · +4

A cautionary tale on fitting decision trees to data from additive models: generalization lower bounds

1 code implementation · 18 Oct 2021 · Yan Shuo Tan, Abhineet Agarwal, Bin Yu

We prove a sharp squared error generalization lower bound for a large class of decision tree algorithms fitted to sparse additive models with $C^1$ component functions.

Additive models · Decision Making · +2
