NeurIPS 2015 • Daniel Vainsencher, Han Liu, Tong Zhang
Abstract: We propose a family of non-uniform sampling strategies that provably speed up a class of linearly convergent stochastic optimization algorithms, including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA).
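To illustrate what non-uniform sampling in SVRG looks like, here is a minimal sketch for a least-squares objective, with sampling probabilities proportional to per-example smoothness constants and an unbiasedness correction. This is a standard importance-sampling choice, not necessarily the specific strategy analyzed in the paper; the function name and all parameters are hypothetical.

```python
import numpy as np

def svrg_nonuniform(A, b, w0, n_epochs=10, inner_steps=None, lr=0.02, seed=0):
    """SVRG for the least-squares objective (1/2n)*||Aw - b||^2,
    sampling example i with probability proportional to its smoothness
    constant L_i = ||a_i||^2 (importance sampling sketch)."""
    n, d = A.shape
    if inner_steps is None:
        inner_steps = n
    L = np.sum(A * A, axis=1) + 1e-12   # per-example smoothness constants
    p = L / L.sum()                     # sample "harder" examples more often
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(n_epochs):
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n   # anchor gradient at snapshot
        for _ in range(inner_steps):
            i = rng.choice(n, p=p)
            gi = (A[i] @ w - b[i]) * A[i]
            gi_snap = (A[i] @ w_snap - b[i]) * A[i]
            # Reweight by 1/(n * p_i) so the variance-reduced step is unbiased.
            w = w - lr * ((gi - gi_snap) / (n * p[i]) + full_grad)
    return w
```

The reweighting by 1/(n·p_i) keeps the stochastic gradient unbiased for any sampling distribution p, while choosing p proportional to L_i equalizes the effective smoothness across examples.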
NeurIPS 2013 • Daniel Vainsencher, Shie Mannor, Huan Xu
We demonstrate the robustness benefits of our approach with experimental results and prove, for the important case of clustering, that it has a non-trivial breakdown point, i.e., it is guaranteed to be robust to a fixed percentage of adversarial, unbounded outliers.
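The breakdown-point idea can be illustrated with a simpler robust-clustering stand-in: trimmed k-means, which ignores a fixed fraction of the farthest points in each iteration and so tolerates a bounded fraction of unbounded outliers. This is not the paper's formulation, only a sketch of the robustness property; the function name and parameters are hypothetical.

```python
import numpy as np

def trimmed_kmeans(X, k, trim_frac=0.1, n_iters=50, init=None, seed=0):
    """k-means that drops the trim_frac farthest points each iteration,
    giving robustness to a bounded fraction of outliers (a simple
    illustration of a non-trivial breakdown point)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = (X[rng.choice(n, k, replace=False)] if init is None
               else np.asarray(init, dtype=float).copy())
    keep = n - int(trim_frac * n)
    for _ in range(n_iters):
        # Squared distance from every point to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        best = d2.min(1)
        inliers = np.argsort(best)[:keep]   # discard the farthest points
        for j in range(k):
            pts = X[inliers][assign[inliers] == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, assign
```

Because the update step only averages the retained points, moving any trim_frac fraction of the data arbitrarily far away cannot drag the centers with it, which is exactly the behavior a non-trivial breakdown point guarantees.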