no code implementations • NeurIPS 2018 • Vikas K. Garg, Ofer Dekel, Lin Xiao

We present a new machine learning technique for training small resource-constrained predictors.

no code implementations • NeurIPS 2017 • Ofer Dekel, Arthur Flajolet, Nika Haghtalab, Patrick Jaillet

We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round.

2 code implementations • ICML 2017 • Tolga Bolukbasi, Joseph Wang, Ofer Dekel, Venkatesh Saligrama

We first pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example.
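A minimal sketch of the adaptive-evaluation idea described above, under assumed names: a cheap gate decides, per example, whether an expensive network component needs to be evaluated. The gate, the threshold, and both predictors here are illustrative stand-ins, not the paper's actual system.

```python
import numpy as np

def cheap_features(x):
    # Low-cost features available to the gate (e.g., early-layer activations).
    return x[:2]

def gate(x, threshold=0.5):
    # Decide whether the expensive component is worth evaluating for this example.
    return np.linalg.norm(cheap_features(x)) > threshold

def shallow_predict(x):
    # Inexpensive fallback predictor.
    return float(np.sum(x) > 0)

def deep_predict(x):
    # Expensive component, evaluated only when the gate requests it.
    return float(np.sum(x ** 2) > 1.0)

def adaptive_predict(x):
    # Per-example choice of which components to run.
    return deep_predict(x) if gate(x) else shallow_predict(x)

print(adaptive_predict(np.array([0.1, 0.2, 1.5])))  # gate off -> shallow path
```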

no code implementations • 29 Dec 2016 • Ofer Dekel

Linear predictors are especially useful when the data is high-dimensional and sparse.
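To illustrate why linear predictors suit high-dimensional sparse data: if only the nonzero coordinates are stored, a prediction costs time proportional to the number of nonzeros rather than the ambient dimension. This dict-based sketch is illustrative, not the paper's implementation.

```python
def predict(w, x):
    # w and x map feature index -> value; only nonzeros are stored.
    # Iterate over the smaller of the two, so cost is O(min(nnz(w), nnz(x))).
    small, big = (w, x) if len(w) <= len(x) else (x, w)
    return sum(v * big.get(i, 0.0) for i, v in small.items())

w = {3: 0.5, 100000: -1.0}   # weights in a nominally 10^6-dimensional space
x = {3: 2.0, 42: 7.0}        # a sparse example with two active features
print(predict(w, x))          # 0.5 * 2.0 = 1.0
```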

no code implementations • NeurIPS 2015 • Ofer Dekel, Ronen Eldan, Tomer Koren

The best algorithm for the general bandit convex optimization problem guarantees a regret of $\widetilde{O}(T^{5/6})$, while the best known lower bound is $\Omega(T^{1/2})$.

no code implementations • 26 Feb 2015 • Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren

We study a general class of online learning problems where the feedback is specified by a graph.
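A rough sketch of the feedback-graph setting: after playing an action, the learner observes the losses of that action's out-neighbors in the graph (full information and bandit feedback are the two extremes). The simple importance-weighted exponential-weights update below is an illustrative simplification, not the algorithms analyzed in the paper.

```python
import math
import random

def exp3_graph(feedback_graph, losses, eta=0.1, rng=random.Random(0)):
    # feedback_graph: action -> set of actions whose losses become visible.
    # losses: list of per-round loss vectors chosen by the adversary.
    n = len(feedback_graph)
    weights = [1.0] * n
    total_loss = 0.0
    for loss_t in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        action = rng.choices(range(n), weights=probs)[0]
        total_loss += loss_t[action]
        for j in feedback_graph[action]:  # losses observed this round
            # Probability that action j's loss is observed.
            p_obs = sum(probs[i] for i in range(n) if j in feedback_graph[i])
            weights[j] *= math.exp(-eta * loss_t[j] / p_obs)
    return total_loss

# Two actions that observe each other: full feedback on a 2-clique.
graph = {0: {0, 1}, 1: {0, 1}}
losses = [[0.1, 0.9]] * 50
print(exp3_graph(graph, losses))
```

With a 2-clique the learner sees both losses every round, so play concentrates quickly on the better action and the cumulative loss stays close to the best fixed action's total of 5.0.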

no code implementations • 23 Feb 2015 • Sébastien Bubeck, Ofer Dekel, Tomer Koren, Yuval Peres

We analyze the minimax regret of the adversarial bandit convex optimization problem.

no code implementations • NeurIPS 2014 • Ofer Dekel, Elad Hazan, Tomer Koren

We study an online learning setting where the player is temporarily deprived of feedback each time it switches to a different action.

no code implementations • 18 May 2014 • Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres

This class includes problems where the algorithm's loss is the minimum over the recent adversarial values, the maximum over the recent values, or a linear combination of the recent values.
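The composite-loss class described above can be sketched as follows: the per-round loss is a function of the $m$ most recent adversarial values, such as their minimum, maximum, or a linear combination. Function and variable names are illustrative.

```python
from collections import deque

def composite_losses(values, m, combine):
    # Slide a window of the m most recent adversarial values and
    # apply the combining function (min, max, weighted sum, ...).
    window = deque(maxlen=m)
    out = []
    for v in values:
        window.append(v)
        if len(window) == m:
            out.append(combine(list(window)))
    return out

vals = [3.0, 1.0, 2.0, 5.0]
print(composite_losses(vals, 2, min))   # [1.0, 1.0, 2.0]
print(composite_losses(vals, 2, max))   # [3.0, 2.0, 5.0]
```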

no code implementations • NeurIPS 2013 • Nicolò Cesa-Bianchi, Ofer Dekel, Ohad Shamir

In particular, we show that with switching costs, the attainable rate with bandit feedback is $\widetilde{\Theta}(T^{2/3})$.

no code implementations • 11 Oct 2013 • Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres

We prove that the player's $T$-round minimax regret in this setting is $\widetilde{\Theta}(T^{2/3})$, thereby closing a fundamental gap in our understanding of learning with bandit feedback.

no code implementations • NeurIPS 2009 • Ofer Dekel

While many advances have already been made on the topic of hierarchical classification learning, we take a step back and examine how a hierarchical classification problem should be formally defined.

no code implementations • NeurIPS 2008 • Ofer Dekel

We present "cutoff averaging", a technique for converting any conservative online learning algorithm into a batch learning algorithm.

