## Efficient online algorithms for fast-rate regret bounds under sparsity

We consider the online convex optimization problem. In the setting of arbitrary sequences and a finite set of parameters, we establish a new fast-rate quantile regret bound. We then investigate optimization over the $\ell_1$-ball by discretizing the parameter space. Our algorithm is projection-free, and we propose an efficient solution by restarting the algorithm on adaptive discretization grids. In the adversarial setting, we develop an algorithm that achieves several rates of convergence with different dependencies on the sparsity of the objective. In the i.i.d. setting, we establish new risk bounds that are adaptive to the sparsity of the problem and to the regularity of the risk (ranging from a rate of $1/\sqrt{T}$ for general convex risk to $1/T$ for strongly convex risk). These results generalize previous work on sparse online learning. They are obtained under a weak assumption on the risk (Łojasiewicz's assumption) that allows multiple optima, which is crucial when dealing with degenerate situations.
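Łojasiewicz-type conditions can be stated in several forms; a minimal sketch consistent with the rate interpolation above (not necessarily the paper's exact formulation, whose constants and statement may differ) is:

```latex
% A Łojasiewicz-type condition on the risk R over the constraint set:
% there exist \beta \in [0,1] and c > 0 such that every \theta admits
% a minimizer \theta^* of R with
\|\theta - \theta^*\|_2^2 \;\le\; c \,\bigl(R(\theta) - R(\theta^*)\bigr)^{\beta}.
```

Here $\beta = 0$ makes the inequality vacuous (general convex risk, rate $1/\sqrt{T}$), while $\beta = 1$ recovers a strong-convexity-like curvature bound (rate $1/T$); crucially, $\theta^*$ may depend on $\theta$, so the set of optima need not be a singleton.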

NeurIPS 2018

# Code

No code implementations yet.
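As an illustration only, below is a minimal Python sketch of the generic ingredient behind such methods: an exponentially weighted average forecaster run over a fixed discretization of the $\ell_1$-ball. This is not the paper's algorithm (which restarts on adaptive grids and uses a refined aggregation scheme); the helper names `l1_grid` and `run_ewa` are hypothetical, and the fixed grid here is deliberately crude.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_grid(d, radius=1.0):
    # Crude fixed grid: the vertices of the l1-ball plus the origin.
    # (The paper uses adaptive discretization grids instead.)
    pts = [np.zeros(d)]
    for i in range(d):
        for s in (-radius, radius):
            e = np.zeros(d)
            e[i] = s
            pts.append(e)
    return np.array(pts)  # shape (2d + 1, d)

def run_ewa(grid, loss_fns, eta=1.0):
    """Exponentially weighted average forecaster over the grid points.

    Each grid point acts as an expert; the played point is the
    weight-averaged grid point, so no projection step is needed.
    """
    K = len(grid)
    log_w = np.zeros(K)                     # log-weights for stability
    plays = []
    for loss in loss_fns:
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        theta_t = w @ grid                  # convex combination of experts
        plays.append(theta_t)
        log_w -= eta * np.array([loss(p) for p in grid])
    return np.array(plays)

# Toy usage: noisy quadratic losses around a sparse optimum.
d = 5
theta_star = np.zeros(d)
theta_star[0] = 1.0
losses = [(lambda x, z=rng.normal(theta_star, 0.1): np.sum((x - z) ** 2))
          for _ in range(200)]
plays = run_ewa(l1_grid(d), losses, eta=0.5)
print("final iterate:", np.round(plays[-1], 2))
```

Because the played point is a convex combination of grid points, it stays in the $\ell_1$-ball by convexity; this is what makes this family of methods projection-free.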
