no code implementations • 12 Feb 2024 • Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran
We demonstrate that the optimal mistake bound under bandit feedback is at most $O(k)$ times the optimal mistake bound in the full-information case, where $k$ is the number of labels.
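As a toy illustration of the two feedback models (our own sketch, not the paper's algorithm; all names are hypothetical), consider a version-space learner that predicts by plurality vote: under full information the true label is revealed and every inconsistent hypothesis is eliminated, while under bandit feedback the learner only sees whether it was right, so a mistake can rule out only the hypotheses that agreed with its own guess — roughly a $1/k$ fraction in the worst case.

```python
from collections import Counter

def run_game(H, sequence, bandit):
    """Toy version-space learner on a sequence realizable by H.
    H: list of hypotheses (dicts mapping instance -> label).
    sequence: list of (instance, true_label) pairs.
    Returns the number of prediction mistakes."""
    V = list(H)  # hypotheses still consistent with all feedback so far
    mistakes = 0
    for x, y in sequence:
        pred = Counter(h[x] for h in V).most_common(1)[0][0]  # plurality vote
        if pred != y:
            mistakes += 1
        if bandit:
            # Bandit feedback: the learner only learns whether pred == y,
            # so it can only filter on agreement with its own prediction.
            V = [h for h in V if (h[x] == pred) == (pred == y)]
        else:
            # Full information: the true label y itself is revealed.
            V = [h for h in V if h[x] == y]
    return mistakes

# Three constant hypotheses over labels {0, 1, 2}; the truth is the constant 2.
H = [{0: c} for c in range(3)]
seq = [(0, 2)] * 3
print(run_game(H, seq, bandit=False), run_game(H, seq, bandit=True))
```

On this sequence the full-information learner eliminates both wrong constants after its first mistake, whereas the bandit learner can be forced to guess the wrong constants one at a time.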
no code implementations • 27 Feb 2023 • Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran
We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$.
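For small finite classes, the shattered-tree definition suggests a brute-force computation. The sketch below is our own illustration (not code from the paper), assuming the natural recursion: branch on an instance where both labels remain realizable, and take the average of the two subtree depths. Halving the maximum average depth then gives the randomized Littlestone dimension in the sense above.

```python
from fractions import Fraction

def max_avg_depth(H):
    """Maximum average leaf depth of a binary mistake tree shattered by H,
    where H is a frozenset of equal-length 0/1 tuples (hypotheses on a
    finite domain).  Recursion: branch on an instance x for which both
    labels are consistent with some hypothesis, averaging the subtrees."""
    if len(H) <= 1:
        return Fraction(0)  # a single hypothesis shatters only the trivial tree
    n = len(next(iter(H)))
    best = Fraction(0)
    for x in range(n):
        H0 = frozenset(h for h in H if h[x] == 0)
        H1 = frozenset(h for h in H if h[x] == 1)
        if H0 and H1:  # both labels must remain realizable at the root
            best = max(best, 1 + (max_avg_depth(H0) + max_avg_depth(H1)) / 2)
    return best

# Singletons over 3 points: each hypothesis labels exactly one point 1.
singletons = frozenset({(1, 0, 0), (0, 1, 0), (0, 0, 1)})
print(max_avg_depth(singletons) / 2)  # randomized Littlestone dimension: 3/4
```

For comparison, the (deterministic) Littlestone dimension of this class is 1, so the randomized quantity can be strictly smaller.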
no code implementations • 9 Jun 2022 • Yuval Filmus, Idan Mehalel, Shay Moran
Given a learning task where the data is distributed among several parties, communication is one of the fundamental resources that the parties would like to minimize.
no code implementations • 18 Apr 2021 • Ofir Gordon, Yuval Filmus, Oren Salzman
In this work, we revisit the complexity analysis of Conflict-Based Search (CBS) to provide tighter bounds on the algorithm's worst-case run-time.
no code implementations • 5 Nov 2016 • Yuval Dagan, Yuval Filmus, Ariel Gabizon, Shay Moran
An optimal strategy for the "20 questions" game is given by a Huffman code for $\pi$: Bob's questions reveal the codeword for $x$ bit by bit.
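A minimal sketch of this strategy (our own illustration, using the standard Huffman construction over a finite distribution $\pi$): build the code, then Bob's $i$-th question asks whether bit $i$ of the codeword for Alice's secret $x$ is 1.

```python
import heapq
from itertools import count

def huffman_code(pi):
    """Binary Huffman code for a distribution pi: {symbol: probability}.
    Returns {symbol: codeword string}."""
    tiebreak = count()  # keeps heap comparisons away from unlike types
    heap = [(p, next(tiebreak), {x: ""}) for x, p in pi.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # two least-likely subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {x: "0" + w for x, w in c0.items()}
        merged.update({x: "1" + w for x, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

def play_20_questions(code, x):
    """Bob's questions reveal the codeword for x bit by bit:
    question i asks 'is bit i of your codeword 1?'."""
    return [int(b) for b in code[x]]

pi = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(pi)
# Expected number of questions = expected codeword length.
avg = sum(p * len(code[x]) for x, p in pi.items())
print(avg)  # 1.75 for this dyadic distribution
```

Since Huffman codes minimize expected codeword length, the expected number of questions is optimal for $\pi$; for the dyadic distribution above it exactly equals the entropy.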