NeurIPS 2020 • Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak
In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization.
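The snippet does not describe the construction itself, so purely as a loose illustration of the general idea (not the paper's algorithm; the function names and the specific logarithmic penalty below are my own assumptions), here is a greedy sketch for monotone submodular maximization under a knapsack constraint, where a barrier term on the remaining budget discounts marginal gains near the constraint boundary:

```python
import math

def barrier_greedy(elements, f, cost, budget, t=0.1):
    # Toy illustration only: greedy for a monotone submodular set
    # function f (given as f(list) -> float) under a knapsack
    # constraint, with each marginal gain discounted by a logarithmic
    # barrier on the budget slack. The barrier tends to -infinity as
    # the budget is exhausted, discouraging picks that leave no slack.
    chosen, spent = [], 0.0
    remaining = set(elements)
    while remaining:
        base = f(chosen)
        scored = [
            (f(chosen + [e]) - base + t * math.log(budget - spent - cost[e]), e)
            for e in remaining
            if budget - spent - cost[e] > 0  # stay strictly inside the budget
        ]
        if not scored:
            break  # no element fits within the remaining budget
        _, best = max(scored, key=lambda s: s[0])
        chosen.append(best)
        spent += cost[best]
        remaining.discard(best)
    return chosen
```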
27 Feb 2019 • Vitaly Feldman, Jan Vondrak
Specifically, their bound on the estimation error of any $\gamma$-uniformly stable learning algorithm on $n$ samples, for losses with range in $[0, 1]$, is $O(\gamma \sqrt{n \log(1/\delta)} + \sqrt{\log(1/\delta)/n})$ with probability at least $1-\delta$.
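For context (an arithmetic check, not part of the quoted abstract), this expression is just the 2018 bound from the next entry split into its two terms:

$$\Big(\gamma + \frac{1}{n}\Big)\sqrt{n \log(1/\delta)} \;=\; \gamma\sqrt{n \log(1/\delta)} \;+\; \sqrt{\frac{\log(1/\delta)}{n}}.$$

Note that once $\gamma = \Omega(1/\sqrt{n})$ the first term is already of constant order, which for losses in $[0, 1]$ makes the bound vacuous; this is why a better dependence on $\gamma$ is of interest.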
NeurIPS 2018 • Vitaly Feldman, Jan Vondrak
Specifically, for a loss function taking values in $[0, 1]$, the generalization error of a $\gamma$-uniformly stable learning algorithm on $n$ samples is known to be within $O((\gamma + 1/n) \sqrt{n \log(1/\delta)})$ of the empirical error with probability at least $1-\delta$.
NeurIPS 2015 • Yaron Singer, Jan Vondrak
We consider the problem of optimizing convex and concave functions with access to an erroneous zeroth-order oracle.
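To make the access model concrete (a minimal sketch under my own assumptions; the paper proves lower bounds for this setting rather than giving this routine), a zeroth-order method sees only possibly-corrupted function values, e.g. in a finite-difference gradient estimate:

```python
import random

def noisy_oracle(f, x, eps):
    # Hypothetical erroneous zeroth-order oracle: returns f(x) with an
    # additive error of magnitude at most eps (drawn uniformly here
    # purely for illustration).
    return f(x) + random.uniform(-eps, eps)

def fd_gradient(f, x, eps, h):
    # Finite-difference gradient estimate built only from noisy values.
    # Each oracle call is off by up to eps, so each coordinate estimate
    # carries error up to 2*eps / (2*h) = eps/h: shrinking the step h
    # below the noise scale stops helping -- roughly the kind of
    # obstruction the paper's lower bounds make precise.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((noisy_oracle(f, xp, eps) - noisy_oracle(f, xm, eps)) / (2 * h))
    return g

# Example: for f(x) = sum of squares, the true gradient at x is 2*x.
print(fd_gradient(lambda v: sum(t * t for t in v), [1.0, -2.0], eps=1e-3, h=0.1))
```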
13 Apr 2015 • Vitaly Feldman, Jan Vondrak
This improves on previous approaches, all of which showed an upper bound of $O(1/\epsilon^2)$ for submodular and XOS functions.
28 Sep 2014 • Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause
Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice?
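For readers unfamiliar with the baseline the question refers to: lazy greedy exploits the fact that, by submodularity, marginal gains only shrink as the solution grows, so previously computed gains are valid upper bounds and most elements never need re-evaluation. A minimal sketch of that classical baseline (cardinality constraint $k$; names are my own), not the paper's faster method:

```python
import heapq

def lazy_greedy(elements, f, k):
    # Lazy greedy for a monotone submodular set function f (given as
    # f(list) -> float) under a cardinality constraint k. Stale gains
    # in the max-heap are upper bounds on current gains, so a popped
    # element whose refreshed gain still beats the best stale bound is
    # provably the greedy choice -- no full re-scan needed.
    chosen = []
    base = f([])
    heap = [(-(f([e]) - base), i, e) for i, e in enumerate(elements)]
    heapq.heapify(heap)
    while heap and len(chosen) < k:
        _, i, e = heapq.heappop(heap)
        gain = f(chosen + [e]) - f(chosen)  # refresh this element's gain
        if not heap or gain >= -heap[0][0]:
            chosen.append(e)  # beats every stale upper bound: take it
        else:
            heapq.heappush(heap, (-gain, i, e))  # re-file with fresh bound
    return chosen
```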
12 Jul 2013 • Vitaly Feldman, Jan Vondrak
This is the first algorithm in the PMAC model that, over the uniform distribution, achieves a constant approximation factor arbitrarily close to 1 for all submodular functions.
2 Apr 2013 • Vitaly Feldman, Pravesh Kothari, Jan Vondrak
We show that these structural results can be exploited to give an attribute-efficient PAC learning algorithm for submodular functions running in time $\tilde{O}(n^2) \cdot 2^{O(1/\epsilon^{4})}$.