no code implementations • 31 Oct 2013 • Dhruv Mahajan, Nikunj Agrawal, S. Sathiya Keerthi, S. Sundararajan, Leon Bottou
In this paper we give a novel approach to the distributed training of linear classifiers (involving smooth losses and L2 regularization) that is designed to reduce the total communication cost.
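The setting can be illustrated with a minimal simulation: each "machine" holds a data shard and, per outer iteration, communicates only a d-dimensional local gradient for an L2-regularized logistic-loss classifier, so communication is O(d) per round rather than O(n). This is a hedged sketch of the general distributed setup only; the paper's actual communication-reducing scheme is not reproduced here, and all names and parameters below are illustrative.

```python
import numpy as np

# Sketch (assumed setup, not the paper's algorithm): distributed gradient
# descent for an L2-regularized logistic-loss linear classifier, where each
# simulated machine ships only its local d-dimensional gradient per round.

rng = np.random.default_rng(0)
n, d, machines, lam, lr = 400, 10, 4, 0.1, 0.5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true > 0, 1.0, -1.0)
shards = np.array_split(np.arange(n), machines)  # one shard per machine

def local_grad(w, idx):
    # gradient of the smooth logistic loss on one shard
    Xi, yi = X[idx], y[idx]
    m = yi * (Xi @ w)
    return -(Xi * (yi / (1.0 + np.exp(m)))[:, None]).sum(axis=0)

def full_loss(w):
    m = y * (X @ w)
    return np.logaddexp(0.0, -m).sum() / n + 0.5 * lam * w @ w

w = np.zeros(d)
losses = [full_loss(w)]
for _ in range(50):
    # one communication round: aggregate the shard gradients, add L2 term
    g = sum(local_grad(w, idx) for idx in shards) / n + lam * w
    w -= lr * g
    losses.append(full_loss(w))
```

Each round moves only machine-count many d-vectors, which is the quantity a communication-efficient method tries to cut further (e.g., by doing more local work per round).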
no code implementations • 4 Nov 2013 • Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan, Leon Bottou
The method has strong convergence properties.
no code implementations • 11 Nov 2013 • P. Balamurugan, Shirish Shevade, S. Sundararajan, S. S. Keerthi
Here, we focus on discriminative models for sequence labeling.
no code implementations • 18 May 2014 • Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan
In this paper we design a distributed algorithm for $l_1$ regularization that is much better suited for such systems than existing algorithms.
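The core building block behind many distributed l1 solvers is a proximal-gradient (ISTA-style) update, whose soft-thresholding step is what keeps iterates sparse. The sketch below shows that step on a least-squares problem; it is an assumed illustration of the l1 mechanics, not the distributed algorithm the paper designs.

```python
import numpy as np

# Sketch (assumed, illustrative): proximal gradient descent for an
# l1-regularized least-squares problem. The soft-thresholding operator is
# the prox of the l1 norm and zeroes out small coordinates exactly.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, d, lam, lr = 200, 20, 0.5, 0.3
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -3.0, 1.5]            # sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
for _ in range(300):
    grad = X.T @ (X @ w - y) / n         # gradient of the smooth part
    w = soft_threshold(w - lr * grad, lr * lam)  # prox step: exact zeros
```

In a distributed variant, the smooth gradient would be aggregated across machines while the cheap thresholding runs locally; the non-smooth l1 term is what makes such problems harder to distribute than the smooth L2 case.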
no code implementations • 18 May 2014 • Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan
This paper concerns the distributed training of nonlinear kernel machines on Map-Reduce.
no code implementations • 27 Dec 2016 • Vishal Kakkar, Shirish K. Shevade, S. Sundararajan, Dinesh Garg
Batch learning methods for solving the kernelized version of this problem suffer from scalability issues and may not result in sparse classifiers.
no code implementations • 1 Feb 2018 • Chien-Chih Wang, Kent Loong Tan, Chun-Ting Chen, Yu-Hsiang Lin, S. Sathiya Keerthi, Dhruv Mahajan, S. Sundararajan, Chih-Jen Lin
First, to reduce the communication cost, we propose a diagonalization method such that an approximate Newton direction can be obtained without communication between machines.
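The idea of a diagonal approximation can be sketched as follows: if each machine scales the aggregated gradient by the (locally computable) diagonal of the Hessian instead of solving with the full Hessian, no extra Hessian-vector communication rounds are needed. The code below shows a diagonally preconditioned step for L2-regularized logistic regression; it is a hedged illustration of the general idea, not the paper's exact diagonalization method.

```python
import numpy as np

# Sketch (assumed, illustrative): approximate Newton step for
# L2-regularized logistic regression using only the Hessian diagonal,
# which each machine can form from its own shard without communication.

rng = np.random.default_rng(2)
n, d, lam = 300, 8, 0.1
X = rng.normal(size=(n, d))
y = np.where(X @ rng.normal(size=d) > 0, 1.0, -1.0)

def grad_and_hess_diag(w):
    m = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(m))          # sigmoid(-m)
    g = -(X * (y * s)[:, None]).sum(0) / n + lam * w
    D = (X**2 * (s * (1 - s))[:, None]).sum(0) / n + lam  # Hessian diagonal
    return g, D

def loss(w):
    return np.logaddexp(0.0, -(y * (X @ w))).sum() / n + 0.5 * lam * w @ w

w = np.zeros(d)
losses = [loss(w)]
for _ in range(30):
    g, D = grad_and_hess_diag(w)
    w -= 0.5 * g / D                     # damped diagonal-Newton step
    losses.append(loss(w))
```

Trading the full Newton system for its diagonal loses curvature information, but the per-iteration cost and communication drop sharply, which is the trade-off motivating such methods.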