Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$, where $\mathrm{OPT}_{\gamma}$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \geq 1$, that are nearly matching for a range of parameters. Specifically, for the natural setting where $\alpha$ is any constant greater than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side, we show that {\em any} constant factor approximate proper learner has runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$, assuming the Exponential Time Hypothesis.
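To make the quantities in the abstract concrete, the following is a minimal sketch (not the paper's learner) of the $\gamma$-margin error that defines $\mathrm{OPT}_{\gamma}$: for a unit-norm weight vector $w$ and examples $(x, y)$ on the unit ball with $y \in \{-1, +1\}$, a point counts as a margin error if $y \cdot \langle w, x \rangle < \gamma$, and setting $\gamma = 0$ recovers the usual misclassification error. The function name `margin_error` and the toy data below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def margin_error(w, X, y, gamma):
    """Fraction of points misclassified at margin gamma by the halfspace sign(<w, x>):
    a point (x, y) is an error if y * <w, x> < gamma. gamma = 0 gives 0/1 error."""
    margins = y * (X @ w)
    return float(np.mean(margins < gamma))

# Hypothetical toy usage: points on the unit sphere in R^d, labels in {-1, +1}.
rng = np.random.default_rng(0)
d, n, gamma = 5, 1000, 0.1
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)                  # unit-norm halfspace normal
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # place points on the unit ball
y = np.sign(X @ w_true)
y[y == 0] = 1
print("gamma-margin error:", margin_error(w_true, X, y, gamma))
print("0/1 error:         ", margin_error(w_true, X, y, 0.0))
```

$\mathrm{OPT}_{\gamma}$ is then the minimum of this $\gamma$-margin error over all unit-norm $w$, and a proper $\alpha$-approximate learner must output a halfspace whose 0/1 error is at most $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$.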
