Search Results for author: Ofer Dekel

Found 14 papers, 1 paper with code

Learning SMaLL Predictors

no code implementations NeurIPS 2018 Vikas K. Garg, Ofer Dekel, Lin Xiao

We present a new machine learning technique for training small resource-constrained predictors.

BIG-bench Machine Learning

Online Learning with a Hint

no code implementations NeurIPS 2017 Ofer Dekel, Arthur Flajolet, Nika Haghtalab, Patrick Jaillet

We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round.

Adaptive Neural Networks for Efficient Inference

2 code implementations ICML 2017 Tolga Bolukbasi, Joseph Wang, Ofer Dekel, Venkatesh Saligrama

We first pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example.

Binary Classification
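The adaptive evaluation scheme described above can be illustrated with a minimal sketch: a cascade of network stages evaluated in order, where each example exits at the first stage whose prediction is sufficiently confident. All names here (`make_stage`, `adaptive_predict`, the sigmoid "stages") are hypothetical stand-ins, not the paper's actual architecture or policy-learning method.

```python
import numpy as np

def make_stage(depth):
    """Hypothetical stage of a deep network: returns class probabilities
    and the cost of evaluating this stage (deeper stages cost more)."""
    def f(x):
        score = float(x[: depth + 2].sum())   # toy "features" for this stage
        p1 = 1.0 / (1.0 + np.exp(-score))     # sigmoid confidence for class 1
        return np.array([1.0 - p1, p1]), depth + 1
    return f

stages = [make_stage(d) for d in range(3)]    # a 3-stage cascade

def adaptive_predict(x, threshold=0.9):
    """Evaluate stages in order and exit early once the prediction is confident."""
    total_cost = 0
    for f in stages:
        p, cost = f(x)
        total_cost += cost
        if p.max() >= threshold:              # confident enough: stop here
            break
    return int(p.argmax()), total_cost

# An "easy" example exits at the first (cheapest) stage.
x = np.array([3.0, 0.2, -0.1, 0.05, 0.0])
label, cost = adaptive_predict(x)             # label=1, cost=1
```

The paper learns the exit policy rather than using a fixed confidence threshold; the sketch only shows the inference-time control flow that such a policy drives.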

Linear Learning with Sparse Data

no code implementations 29 Dec 2016 Ofer Dekel

Linear predictors are especially useful when the data is high-dimensional and sparse.

Translation
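The point about high-dimensional sparse data can be made concrete with a minimal sketch (not the paper's algorithm): representing vectors as index-to-value maps makes a linear prediction cost proportional to the number of nonzeros rather than the ambient dimension. The helper name `sparse_dot` is an illustrative choice.

```python
def sparse_dot(w, x):
    """Inner product that touches only the nonzero coordinates.

    w, x: dicts mapping coordinate index -> nonzero value.
    """
    if len(x) > len(w):          # iterate over the smaller support
        w, x = x, w
    return sum(v * w.get(i, 0.0) for i, v in x.items())

w = {0: 0.5, 3: -1.0, 7: 2.0}    # sparse linear predictor
x = {3: 4.0, 5: 1.0, 7: 0.5}     # sparse example
score = sparse_dot(w, x)          # -1.0*4.0 + 2.0*0.5 = -3.0
```

The ambient dimension never appears: only the overlap of the two supports is visited, which is exactly why linear predictors pair well with sparse data.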

Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff

no code implementations NeurIPS 2015 Ofer Dekel, Ronen Eldan, Tomer Koren

The best algorithm for the general bandit convex optimization problem guarantees a regret of $\widetilde{O}(T^{5/6})$, while the best known lower bound is $\Omega(T^{1/2})$.

Online Learning with Feedback Graphs: Beyond Bandits

no code implementations 26 Feb 2015 Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren

We study a general class of online learning problems where the feedback is specified by a graph.

Bandit Convex Optimization: $\sqrt{T}$ Regret in One Dimension

no code implementations 23 Feb 2015 Sébastien Bubeck, Ofer Dekel, Tomer Koren, Yuval Peres

We analyze the minimax regret of the adversarial bandit convex optimization problem.

Thompson Sampling

The Blinded Bandit: Learning with Adaptive Feedback

no code implementations NeurIPS 2014 Ofer Dekel, Elad Hazan, Tomer Koren

We study an online learning setting where the player is temporarily deprived of feedback each time it switches to a different action.

Online Learning with Composite Loss Functions

no code implementations 18 May 2014 Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres

This class includes problems where the algorithm's loss is the minimum over the recent adversarial values, the maximum over the recent values, or a linear combination of the recent values.

Online Learning with Costly Features and Labels

no code implementations NeurIPS 2013 Nicolò Cesa-Bianchi, Ofer Dekel, Ohad Shamir

In particular, we show that with switching costs, the attainable rate with bandit feedback is $T^{2/3}$.

Bandits with Switching Costs: $T^{2/3}$ Regret

no code implementations 11 Oct 2013 Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres

We prove that the player's $T$-round minimax regret in this setting is $\widetilde{\Theta}(T^{2/3})$, thereby closing a fundamental gap in our understanding of learning with bandit feedback.

Online Learning with Switching Costs and Other Adaptive Adversaries

no code implementations NeurIPS 2013 Nicolò Cesa-Bianchi, Ofer Dekel, Ohad Shamir

In particular, we show that with switching costs, the attainable rate with bandit feedback is $\widetilde{\Theta}(T^{2/3})$.

Distribution-Calibrated Hierarchical Classification

no code implementations NeurIPS 2009 Ofer Dekel

While many advances have already been made on the topic of hierarchical classification learning, we take a step back and examine how a hierarchical classification problem should be formally defined.

Classification, General Classification

From Online to Batch Learning with Cutoff-Averaging

no code implementations NeurIPS 2008 Ofer Dekel

We present "cutoff averaging", a technique for converting any conservative online learning algorithm into a batch learning algorithm.
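A minimal sketch of the idea, under a simplified reading: run a conservative online learner (here the Perceptron, which updates only on mistakes), record how long each intermediate hypothesis survives without an update, and average only the hypotheses that survive past a cutoff, weighted by survival time. The function names and the choice of cutoff are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def perceptron_with_survival(X, y):
    """Run the (conservative) Perceptron over (X, y) with labels in {-1, +1},
    recording each intermediate hypothesis and how many rounds it survived."""
    w = np.zeros(X.shape[1])
    hyps, survival = [], []
    streak = 0
    for x, label in zip(X, y):
        if label * w.dot(x) <= 0:            # mistake: update the hypothesis
            hyps.append(w.copy())
            survival.append(streak)
            w = w + label * x
            streak = 0
        else:                                # correct: current hypothesis survives
            streak += 1
    hyps.append(w.copy())
    survival.append(streak)
    return hyps, survival

def cutoff_average(hyps, survival, k):
    """Average hypotheses that survived at least k rounds, weighted by
    survival time; fall back to the final hypothesis if none qualify."""
    sel = [(w, s) for w, s in zip(hyps, survival) if s >= k]
    if not sel:
        return hyps[-1]
    total = sum(s for _, s in sel)
    return sum(s * w for w, s in sel) / total

# Toy linearly separable data (hypothetical usage).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] + 0.1 * X[:, 1] > 0, 1.0, -1.0)
hyps, survival = perceptron_with_survival(X, y)
w_k = cutoff_average(hyps, survival, k=5)
```

Weighting by survival rewards hypotheses that went long stretches without a mistake, which is the intuition behind preferring them over short-lived intermediate hypotheses.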
