no code implementations • 23 Feb 2024 • Xin Lyu, Hongxun Wu, Junzhao Yang
Karbasi and Larsen showed that "significant" parallelization must incur exponential blow-up: any boosting algorithm either interacts with the weak learner for $\Omega(1 / \gamma)$ rounds or incurs an $\exp(d / \gamma)$ blow-up in the complexity of training, where $d$ is the VC dimension of the hypothesis class and $\gamma$ is the advantage of the weak learner.
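As a rough illustration of what a "round of interaction with the weak learner" means (this is a standard AdaBoost-style sketch, not the construction studied in the paper; the names `weak_learner` and `boost` are purely illustrative), each round queries the weak learner once on a reweighted sample and folds its hypothesis into the ensemble:

```python
# Minimal sequential boosting sketch: one weak-learner query per round.
import numpy as np

def weak_learner(X, y, w):
    """Illustrative weak learner: the decision stump (feature, threshold, sign)
    with the smallest weighted error under the current distribution w."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def boost(X, y, rounds):
    """Each iteration is one round of interaction with the weak learner."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # distribution over training examples
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = weak_learner(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)   # up-weight the examples it got wrong
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        score += alpha * np.where(X[:, j] <= thr, sign, -sign)
    return np.sign(score)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # toy labels
    ens = boost(X, y, rounds=20)
    print("training accuracy:", np.mean(predict(ens, X) == y))
```

The point of the sketch is only that the loop is inherently sequential: the distribution fed to round $t+1$ depends on the hypothesis returned in round $t$, which is the interaction the lower bound counts.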
no code implementations • 12 Oct 2023 • Xin Lyu, Avishay Tal, Hongxun Wu, Junzhao Yang
In this work, for any constant $q$, we prove tight memory-sample lower bounds for any parity learning algorithm that makes $q$ passes over the stream of samples.
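For context, the sketch below (our own illustration under stated assumptions, not the paper's algorithm or its hard instance) sets up the streaming parity learning problem: a hidden $x \in \{0,1\}^n$, a stream of samples $(a, \langle a, x\rangle \bmod 2)$, and a one-pass learner that stores roughly $n(n+1)$ bits as an echelon-form basis over $\mathbb{F}_2$. The lower bounds concern learners with substantially less memory, which must pay in samples even when allowed $q$ passes.

```python
# Parity learning from a stream: hidden x, samples (a, <a, x> mod 2).
import numpy as np

def sample_stream(x, num_samples, rng):
    """Yield random parity samples one at a time."""
    n = len(x)
    for _ in range(num_samples):
        a = rng.integers(0, 2, size=n)
        yield a, int(a @ x % 2)

def solve_from_basis(basis, n):
    """Back-substitute an echelon-form GF(2) system keyed by pivot column."""
    x = np.zeros(n, dtype=np.int64)
    for col in range(n - 1, -1, -1):
        row = basis[col]
        x[col] = (row[-1] + row[col + 1:n] @ x[col + 1:]) % 2
    return x

def one_pass_learner(stream, n):
    """One pass, storing ~n*(n+1) bits: a pivot-indexed basis of augmented
    rows [a | b].  Learners with much less memory cannot maintain this state,
    which is the regime the memory-sample lower bounds address."""
    basis = {}                                  # pivot column -> augmented row
    for a, b in stream:
        v = np.append(a, b).astype(np.int64)
        for col in range(n):
            if v[col] == 0:
                continue
            if col in basis:
                v = (v + basis[col]) % 2        # eliminate the leading 1
            else:
                basis[col] = v                  # new pivot: keep the row
                break
        if len(basis) == n:                     # full rank: recover x exactly
            return solve_from_basis(basis, n)
    return None                                 # too few independent samples

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 16
    x = rng.integers(0, 2, size=n)
    recovered = one_pass_learner(sample_stream(x, 200, rng), n)
    print("recovered hidden parity vector:", np.array_equal(recovered, x))
```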
no code implementations • 30 Jun 2023 • Moses Charikar, Prasanna Ramakrishnan, Kangning Wang, Hongxun Wu
To do so, we study a handful of voting rules that are new to the problem.