no code implementations • NeurIPS 2017 • Daniel Hsu, Kevin Shi, Xiaorui Sun
Next, in an average-case and noise-free setting where the responses exactly correspond to a linear function of i.i.d.
no code implementations • NeurIPS 2014 • Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun
The "approximation factor" $C$ in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of $k$ and $\epsilon$ can achieve $C<2$ regardless of what kind of hypothesis distribution it uses.
no code implementations • 14 May 2013 • Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun
We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(t, d, 1/\epsilon)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau)+\epsilon)$-close (in total variation distance) to $p$.
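The abstract's result can be illustrated with a toy sketch, not the paper's algorithm: approximate an unknown density on $[0,1]$ by a $t$-piece, degree-$d$ piecewise polynomial, here fit by simple least squares to a histogram of the samples. All function names, the `Beta(2,5)` stand-in density, and the bin counts are illustrative assumptions.

```python
# Hedged sketch (NOT the paper's algorithm): approximate an unknown density
# on [0, 1] by a t-piece, degree-d piecewise polynomial, fit by least
# squares to the empirical histogram density on each piece.
import numpy as np

def fit_piecewise_poly(samples, t=4, d=2, bins_per_piece=16):
    """Fit a degree-d polynomial to the empirical density on each of
    t equal-width pieces of [0, 1]; returns (lo, hi, poly1d) triples."""
    edges = np.linspace(0.0, 1.0, t + 1)
    pieces = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        grid = np.linspace(lo, hi, bins_per_piece + 1)
        counts, _ = np.histogram(samples, bins=grid)
        width = grid[1] - grid[0]
        density = counts / (len(samples) * width)  # empirical density per bin
        centers = 0.5 * (grid[:-1] + grid[1:])
        coeffs = np.polyfit(centers, density, d)   # least-squares polynomial fit
        pieces.append((lo, hi, np.poly1d(coeffs)))
    return pieces

def eval_piecewise(pieces, x):
    """Evaluate the fitted piecewise polynomial at a point x in [0, 1]."""
    for lo, hi, p in pieces:
        if lo <= x <= hi:
            return float(p(x))
    return 0.0

rng = np.random.default_rng(0)
samples = rng.beta(2.0, 5.0, size=20000)  # stand-in for the unknown density p
pieces = fit_piecewise_poly(samples, t=4, d=2)
print(round(eval_piecewise(pieces, 0.2), 2))  # near the Beta(2,5) mode
```

The paper's contribution lies elsewhere: achieving the near-optimal $\tilde{O}(t(d+1)/\epsilon^2)$ sample complexity with an agnostic (total-variation) guarantee, which a naive histogram-plus-least-squares fit like this does not provide.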