no code implementations • 17 Aug 2021 • Olivier Bousquet, Mark Braverman, Klim Efremenko, Gillat Kol, Shay Moran
We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p, q) \leq 2 \cdot \mathrm{opt} + \epsilon$, with a (nearly) optimal sample complexity of~$\tilde O(\log n/\epsilon^2)$.
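To make the objective concrete, here is a minimal sketch of total variation distance over finite supports, together with a naive minimum-distance selection rule (pick the hypothesis closest in TV to the empirical distribution). This is an illustration of the $\mathsf{TV}$ objective only, not the paper's optimal $2$-approximation strategy; the distribution-as-dict representation is an assumption for the example.

```python
from collections import Counter

def tv_distance(p, q):
    """Total variation distance between two finite discrete
    distributions, each given as a dict mapping outcome -> probability."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def select_hypothesis(samples, hypotheses):
    """Naive selection rule: return the hypothesis whose TV distance to
    the empirical distribution of the samples is smallest.  (Illustrative
    only -- NOT the paper's 2-approximation learning strategy.)"""
    n = len(samples)
    empirical = {x: c / n for x, c in Counter(samples).items()}
    return min(hypotheses, key=lambda q: tv_distance(empirical, q))
```

For example, with samples `['a', 'a', 'a', 'b']` and hypotheses `{'a': 1.0}` and `{'b': 1.0}`, the rule picks the first, since the empirical distribution `{'a': 0.75, 'b': 0.25}` is at TV distance $0.25$ from it versus $0.75$ from the second.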
no code implementations • 22 Feb 2021 • Lijie Chen, Gillat Kol, Dmitry Paramonov, Raghuvansh Saxena, Zhao Song, Huacheng Yu
In addition, we show a similar $\tilde{\Theta}(n \cdot \sqrt{L})$ bound on the space complexity of any algorithm (with any number of passes) for the related problem of sampling an $L$-step random walk from every vertex in the graph.
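The problem being bounded above can be stated very simply when memory is unconstrained: walk $L$ steps from each vertex, choosing a uniformly random neighbour at each step. The sketch below does exactly that with the whole graph in memory (the adjacency-dict representation is an assumption); the $\tilde{\Theta}(n \cdot \sqrt{L})$ bound concerns streaming algorithms that cannot afford this.

```python
import random

def sample_walks(adj, L, rng=random):
    """Sample one L-step random walk starting from every vertex.
    adj maps each vertex to a list of its neighbours; a walk stops
    early if it reaches a vertex with no outgoing edges.  This is a
    direct O(n * L) in-memory simulation, ignoring the space
    constraints of the streaming setting."""
    walks = {}
    for v in adj:
        walk = [v]
        cur = v
        for _ in range(L):
            nbrs = adj.get(cur, [])
            if not nbrs:
                break
            cur = rng.choice(nbrs)
            walk.append(cur)
        walks[v] = walk
    return walks
```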
Data Structures and Algorithms • Computational Complexity
no code implementations • 8 Sep 2019 • Mark Braverman, Gillat Kol, Shay Moran, Raghuvansh R. Saxena
For Convex Set Disjointness (and the equivalent task of distributed LP feasibility) we derive upper and lower bounds of $\tilde O(d^2\log n)$ and~$\Omega(d\log n)$.
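In Convex Set Disjointness, two parties hold point sets and must decide whether their convex hulls intersect. As a toy illustration of the task (not of the paper's protocol), here is the one-dimensional case, where each hull is just an interval; the $\tilde O(d^2\log n)$ and $\Omega(d\log n)$ bounds concern the $d$-dimensional communication problem.

```python
def hulls_intersect_1d(alice_pts, bob_pts):
    """1-D toy version of Convex Set Disjointness: each party's convex
    hull is an interval [min, max], and two intervals intersect iff
    neither ends before the other begins.  Illustrative only."""
    a_lo, a_hi = min(alice_pts), max(alice_pts)
    b_lo, b_hi = min(bob_pts), max(bob_pts)
    return a_lo <= b_hi and b_lo <= a_hi
```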