1 code implementation • 2 Sep 2023 • Chhavi Sharma, Vishnu Narayanan, P. Balamurugan
To tackle this, we develop a simple and effective switching idea: a generalized stochastic gradient (GSG) computation oracle is employed to hasten the iterates' progress toward a saddle-point solution during the initial phase of updates, followed by a switch to the SVRG oracle at an appropriate juncture.
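The two-phase idea above can be sketched on a toy finite-sum saddle-point problem. This is an illustrative stand-in only: the problem, step sizes, and switching point are our assumptions, and plain stochastic gradient descent-ascent stands in for the GSG oracle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum saddle problem (illustrative, not from the paper):
#   min_x max_y (1/n) * sum_i [ mu/2 * x^2 + a_i * x * y - mu/2 * y^2 ]
# whose unique saddle point is (0, 0).
n, mu = 50, 1.0
a = rng.normal(1.0, 0.2, size=n)

def grad_i(x, y, i):
    # per-component gradients (descent direction in x, ascent in y)
    return mu * x + a[i] * y, a[i] * x - mu * y

def full_grad(x, y):
    return mu * x + a.mean() * y, a.mean() * x - mu * y

x, y, step = 5.0, -5.0, 0.05

# Phase 1: cheap stochastic gradient steps (stand-in for the GSG oracle)
for _ in range(200):
    i = rng.integers(n)
    gx, gy = grad_i(x, y, i)
    x, y = x - step * gx, y + step * gy

# Phase 2: switch to an SVRG oracle (snapshot + variance-reduced steps)
for _ in range(20):
    sx, sy = x, y                    # snapshot point
    fgx, fgy = full_grad(sx, sy)     # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        gx, gy = grad_i(x, y, i)
        hx, hy = grad_i(sx, sy, i)
        vx, vy = gx - hx + fgx, gy - hy + fgy  # variance-reduced estimates
        x, y = x - step * vx, y + step * vy

# (x, y) should now be very close to the saddle point (0, 0)
```

The switch matters because the plain stochastic phase is cheap per iteration, while the SVRG phase pays a periodic full-gradient cost in exchange for faster local convergence.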
no code implementations • 28 May 2022 • Chhavi Sharma, Vishnu Narayanan, P. Balamurugan
Next, we present a Decentralized Proximal Stochastic Variance Reduced Gradient algorithm with Compression (C-DPSVRG) for the finite-sum setting, which exhibits gradient computation and communication complexities of order $\mathcal{O} \left((1+\delta) \max \{\kappa_f^2, \sqrt{\delta}\kappa^2_f\kappa_g,\kappa_g \} \log\left(\frac{1}{\epsilon}\right) \right)$.
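Compressed decentralized methods of this kind rely on a contractive compression operator applied to the messages nodes exchange. A minimal sketch of one standard choice, the top-$k$ compressor, is below; the paper's exact compressor and the role of $\delta$ are not specified in this excerpt, so this is only an example of the general ingredient.

```python
import numpy as np

def topk_compress(v, k):
    """Contractive top-k compressor: keep the k largest-magnitude entries.

    Deterministically satisfies ||C(v) - v||^2 <= (1 - k/d) * ||v||^2,
    the standard contraction property assumed by compressed decentralized
    methods. (Illustrative sketch; names are ours, not the paper's.)
    """
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
c = topk_compress(v, 2)   # only the two dominant entries survive
```

Transmitting only the surviving entries (value plus index) is what reduces per-round communication, at the cost of the compression error the analysis must control.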
no code implementations • NeurIPS 2016 • P. Balamurugan, Francis Bach
We consider convex-concave saddle-point problems where the objective function may be split into many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which is common in machine learning.
no code implementations • 25 Dec 2014 • P. K. Srijith, P. Balamurugan, Shirish Shevade
We provide Gaussian process models based on a pseudo-likelihood approximation to perform sequence labeling.
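The core of a pseudo-likelihood approximation is replacing the full sequence likelihood (and its expensive normalizer) with a product of per-position conditionals given the neighbouring labels. A minimal sketch for a chain-structured labeler follows; the score shapes and parameter names are our assumptions, not the paper's Gaussian process formulation.

```python
import numpy as np

def pseudo_log_likelihood(emissions, transitions, labels):
    """Pseudo-log-likelihood for chain-structured sequence labeling.

    Approximates log p(y | x) by sum_t log p(y_t | y_{t-1}, y_{t+1}, x),
    where each conditional is a local softmax over K labels -- avoiding
    the full forward-backward normalizer.
    (Illustrative sketch; shapes/scores are assumptions, not the paper's.)

    emissions:   (T, K) local score of each label at each position
    transitions: (K, K) score of moving from label i to label j
    labels:      length-T array of gold labels
    """
    T, K = emissions.shape
    total = 0.0
    for t in range(T):
        # score of each candidate label at t, with neighbours held fixed
        scores = emissions[t].copy()
        if t > 0:
            scores += transitions[labels[t - 1], :]
        if t < T - 1:
            scores += transitions[:, labels[t + 1]]
        scores -= scores.max()                    # numerical stability
        log_z = np.log(np.exp(scores).sum())      # local normalizer only
        total += scores[labels[t]] - log_z
    return total

# tiny usage: emissions that strongly favour the gold labels
emissions = np.zeros((3, 2))
emissions[np.arange(3), [0, 1, 0]] = 10.0
pll = pseudo_log_likelihood(emissions, np.zeros((2, 2)), np.array([0, 1, 0]))
```

Each local normalizer sums over only $K$ labels, so the whole approximation costs $O(TK)$ instead of the $O(TK^2)$ forward-backward pass needed for the exact likelihood.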
no code implementations • 11 Nov 2013 • P. Balamurugan, Shirish Shevade, S. Sundararajan, S. S Keerthi
Here, we focus on discriminative models for sequence labeling.
no code implementations • 9 Nov 2013 • P. Balamurugan, Shirish Shevade, Sundararajan Sellamanickam
The optimization problem, which in general is not convex, contains the loss terms associated with the labelled and unlabelled examples along with the domain constraints.
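The shape of that objective, a labelled loss plus an unlabelled loss, can be sketched as below. This is a hedged illustration only: the excerpt does not give the paper's actual losses or domain constraints, so a logistic labelled loss and an entropy-style unlabelled term (which already makes the problem non-convex) stand in for them.

```python
import numpy as np

def semi_supervised_objective(w, X_l, y_l, X_u, lam=0.1):
    """Illustrative 'labelled loss + unlabelled loss' objective.

    Stand-in terms (not the paper's): logistic loss on labelled data,
    plus a prediction-entropy term on unlabelled data that rewards
    confident predictions and makes the objective non-convex in w.
    """
    # labelled part: average logistic loss over (X_l, y_l), y in {-1, +1}
    margins = y_l * (X_l @ w)
    loss_labelled = np.log1p(np.exp(-margins)).mean()

    # unlabelled part: average binary entropy of the model's predictions
    p = 1.0 / (1.0 + np.exp(-(X_u @ w)))
    eps = 1e-12
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps)).mean()

    return loss_labelled + lam * entropy

w = np.zeros(2)
X_l = np.array([[1.0, 0.0], [0.0, 1.0]])
y_l = np.array([1.0, -1.0])
X_u = np.array([[1.0, 1.0]])
obj = semi_supervised_objective(w, X_l, y_l, X_u)
```

Domain constraints (e.g. restrictions on feasible label assignments) would enter as an additional constraint set or penalty on top of these two terms.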