no code implementations • 5 Apr 2023 • Tung Mai, Alexander Munteanu, Cameron Musco, Anup B. Rao, Chris Schwiegelshohn, David P. Woodruff
For this problem, under the $\ell_2$ norm, we observe an upper bound of $O(k \log (d)/\varepsilon + k\log(k/\varepsilon)/\varepsilon^2)$ rows, showing that sparse recovery is strictly easier to sketch than sparse regression.
no code implementations • NeurIPS 2021 • Tung Mai, Anup B. Rao, Cameron Musco
The coreset also does not depend on the specific loss function, so a single coreset can be used in multiple training scenarios.
1 code implementation • 8 Sep 2020 • My Phan, David Arbour, Drew Dimmery, Anup B. Rao
To reduce the variance of our estimator, we design a covariate balance condition (Target Balance) between the treatment and control groups based on the target population.
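As a generic illustration of comparing covariate distributions between treatment and control groups, the sketch below computes per-covariate standardized mean differences (SMDs). This is a standard balance diagnostic, not the paper's Target Balance condition; the function name and interface are invented for this example.

```python
import math

def standardized_mean_differences(treated, control):
    """Per-covariate standardized mean difference (SMD) between two groups.

    A generic covariate-balance diagnostic -- NOT the paper's Target
    Balance condition -- shown only to illustrate the idea of comparing
    treatment and control covariate distributions.
    """
    d = len(treated[0])
    smds = []
    for j in range(d):
        t = [x[j] for x in treated]
        c = [x[j] for x in control]
        mt, mc = sum(t) / len(t), sum(c) / len(c)
        # Population variances and pooled standard deviation.
        vt = sum((x - mt) ** 2 for x in t) / len(t)
        vc = sum((x - mc) ** 2 for x in c) / len(c)
        pooled = math.sqrt((vt + vc) / 2)
        smds.append(0.0 if pooled == 0 else (mt - mc) / pooled)
    return smds
```

Identical group distributions yield SMDs of zero; large absolute SMDs flag covariates on which the groups are imbalanced.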
no code implementations • 6 May 2019 • David Durfee, Yu Gao, Anup B. Rao, Sebastian Wild
We give an algorithm to compute a one-dimensional shape-constrained function that best fits given data in weighted-$L_{\infty}$ norm.
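For the classical *unweighted* $L_\infty$ special case with a monotone (non-decreasing) shape constraint, the optimal fit has a simple closed form: the midpoint of the prefix maximum and suffix minimum of the data. The sketch below implements that well-known special case only; the paper's weighted-$L_\infty$ setting requires more machinery.

```python
def isotonic_linf_fit(y):
    """Best non-decreasing fit to y under the unweighted L-infinity norm.

    Classical closed form: g[i] = (prefix_max[i] + suffix_min[i]) / 2.
    Illustrative special case only, not the paper's weighted algorithm.
    """
    n = len(y)
    prefix_max = [0.0] * n
    suffix_min = [0.0] * n
    running = float("-inf")
    for i, v in enumerate(y):          # running maximum from the left
        running = max(running, v)
        prefix_max[i] = running
    running = float("inf")
    for i in range(n - 1, -1, -1):     # running minimum from the right
        running = min(running, y[i])
        suffix_min[i] = running
    return [(a + b) / 2 for a, b in zip(prefix_max, suffix_min)]
```

On `[3, 1, 2, 5, 4]` this returns the monotone sequence `[2.0, 2.0, 2.5, 4.5, 4.5]`, whose maximum deviation of 1 matches the lower bound forced by the inversion between 3 and 1.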
2 code implementations • 24 Apr 2016 • Kevin A. Lai, Anup B. Rao, Santosh Vempala
We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\mathbb{R}^n$, in the presence of an $\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type.
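A simple baseline for tolerating an $\eta$ fraction of arbitrarily corrupted samples is the coordinate-wise trimmed mean: in each coordinate, discard the extreme values and average the rest. This is only a naive illustration of robustness to malicious noise, not the paper's agnostic estimator, whose error guarantees are substantially stronger.

```python
import math

def trimmed_mean(samples, eta):
    """Coordinate-wise trimmed mean of a list of d-dimensional samples.

    In each coordinate, drop the ceil(eta * n) smallest and largest
    values, then average the remainder. A naive baseline robust to an
    eta fraction of outliers -- NOT the paper's agnostic estimator.
    """
    n = len(samples)
    d = len(samples[0])
    k = math.ceil(eta * n)
    est = []
    for j in range(d):
        col = sorted(x[j] for x in samples)
        kept = col[k:n - k]              # discard k extremes on each side
        est.append(sum(kept) / len(kept))
    return est
```

With nine inliers at `[0, 1]` and one planted outlier at `[100, -100]`, trimming at `eta = 0.1` recovers `[0.0, 1.0]` exactly, whereas the plain mean would be pulled toward the outlier.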