Continuous Submodular Maximization: Beyond DR-Submodularity

NeurIPS 2020  ·  Moran Feldman, Amin Karbasi

In this paper, we propose the first continuous optimization algorithms that achieve a constant-factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\varepsilon)$-approximation guarantee while performing $O(n/\varepsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\varepsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem). We then propose Coordinate-Ascent++, which achieves the tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations, but at a higher computational complexity of roughly $O(n^3/\varepsilon^{2.5} + n^3 \log n / \varepsilon^2)$ per iteration. However, the computation of each round of Coordinate-Ascent++ can be easily parallelized, so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
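The abstract states only the guarantees; the actual update rules of Coordinate-Ascent+ and Coordinate-Ascent++ are given in the paper. As a rough, non-authoritative illustration of the problem setting, the sketch below runs plain coordinate ascent on a monotone continuous submodular objective over $[0,1]^n$ under a linear (knapsack) constraint $\langle c, x \rangle \le b$. All names (`F`, `c`, `b`, `step`, `n_passes`) and the toy objective are assumptions chosen for illustration, not the paper's method.

```python
import numpy as np

def coordinate_ascent(F, n, c, b, step=0.01, n_passes=10):
    """Sketch of vanilla coordinate ascent for maximizing a monotone
    continuous submodular F over [0,1]^n subject to <c, x> <= b.

    Illustrative placeholder only: this is NOT the paper's
    Coordinate-Ascent+/++, whose coordinate-selection and step rules
    are what yield the stated approximation guarantees.
    """
    x = np.zeros(n)
    budget = float(b)
    for _ in range(n_passes):
        for i in range(n):
            # Largest feasible move along coordinate i (box and budget).
            delta = min(step, 1.0 - x[i], budget / c[i])
            if delta <= 1e-12:
                continue
            e = np.zeros(n)
            e[i] = delta
            # Monotone F: accept only strictly improving moves.
            if F(x + e) > F(x):
                x += e
                budget -= delta * c[i]
    return x

# Toy monotone (DR-)submodular objective F(x) = sum_j log(1 + (A x)_j)
# with A >= 0; the paper targets a broader class beyond DR-submodularity.
rng = np.random.default_rng(0)
A = rng.random((5, 4))
F = lambda x: np.sum(np.log1p(A @ x))
x_hat = coordinate_ascent(F, n=4, c=np.ones(4), b=2.0)
print(x_hat, F(x_hat))
```

The per-coordinate step here is a fixed `step`; the $O(n/\sqrt{\varepsilon}+n\log n)$ per-iteration cost quoted above comes from the paper's more careful step-size search, which this sketch does not reproduce.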
