Projection-free Online Learning with Arbitrary Delays

11 Apr 2022  ·  Yuanyu Wan, Yibo Wang, Chang Yao, Wei-Wei Tu, Lijun Zhang

Projection-free online learning, which eschews the projection operation via less expensive computations such as linear optimization (LO), has received much interest recently due to its efficiency in handling high-dimensional problems with complex constraints. However, previous studies assume that any queried gradient is revealed immediately, which may not hold in practice and limits their applications. To address this limitation, we generalize the online Frank-Wolfe (OFW) algorithm and the online smooth projection-free (OSPF) algorithm, which are state-of-the-art LO-based projection-free online algorithms for non-smooth and smooth functions respectively, to a delayed setting where queried gradients can be delayed by an arbitrary number of rounds. Specifically, the main idea of our generalized OFW is to perform an update similar to the original OFW after receiving any delayed gradient, and to play the latest decision in each round. Moreover, the essential change to OSPF is to replace the sum of queried gradients, which is originally utilized in each update, with the sum of available gradients. Despite their simplicity, our novel analysis shows that under a relatively large amount of delay, the generalized OFW and OSPF enjoy the same regret bounds as OFW and OSPF in the non-delayed setting, respectively.
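
The abstract only sketches the delayed OFW idea in words, so the following is a minimal illustrative sketch, not the paper's algorithm: it assumes a hypothetical linear-optimization oracle `lo_oracle`, a hypothetical delayed-feedback callback `grad_feedback`, and a standard OFW-style quadratic surrogate with a generic Frank-Wolfe step size, none of which are specified in the abstract. It is meant only to show the stated structure: play the latest decision each round, and perform one OFW-style update per gradient whenever it arrives, however late.

```python
import numpy as np


def delayed_ofw(T, dim, lo_oracle, grad_feedback, eta=0.1):
    """Illustrative sketch of OFW-style updates under arbitrary gradient delays.

    lo_oracle(c):     hypothetical oracle returning argmin_{v in K} <c, v>
                      over the feasible set K (one linear optimization call).
    grad_feedback(t): hypothetical environment callback returning the list of
                      (round, gradient) pairs that happen to arrive at round t.
    """
    x = lo_oracle(np.zeros(dim))   # arbitrary feasible starting point
    x1 = x.copy()                  # anchor point of the quadratic surrogate
    grad_sum = np.zeros(dim)       # sum of gradients received so far
    k = 0                          # number of updates performed so far
    decisions = []

    for t in range(1, T + 1):
        # Play the latest available decision in round t.
        decisions.append(x.copy())

        # One OFW-style update per gradient that arrives (possibly delayed).
        for _, g in grad_feedback(t):
            grad_sum += g
            k += 1
            # Gradient of the surrogate F(x) = eta*<grad_sum, x> + ||x - x1||^2.
            direction = eta * grad_sum + 2.0 * (x - x1)
            v = lo_oracle(direction)            # linear optimization step
            sigma = min(1.0, 2.0 / (k + 2))     # generic Frank-Wolfe step size
            x = x + sigma * (v - x)

    return decisions
```

In this sketch, updates are driven purely by gradient arrivals, so rounds with no feedback simply replay the current decision; the actual step sizes and surrogate used in the paper may differ.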
