Search Results for author: Chenxin Ma

Found 8 papers, 3 papers with code

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy

no code implementations • 26 Oct 2018 • Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč

In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which the sample size is gradually increased to quickly obtain a solution whose empirical loss is within satisfactory statistical accuracy.
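
A minimal single-machine sketch of the accumulating-sample idea described in this abstract is below: the training subsample grows geometrically and each stage warm-starts an inexact Newton-CG solve. The logistic-loss objective, the growth factor, and the use of SciPy's Newton-CG solver are illustrative assumptions, not the distributed DANCE implementation.

```python
# Sketch only: accumulating-sample Newton-type scheme on one machine.
# The objective (regularized logistic loss), growth factor, and solver
# are assumptions for illustration, not the paper's DANCE method.
import numpy as np
from scipy.optimize import minimize

def logistic_loss(w, X, y, reg=1e-4):
    z = y * (X @ w)
    return np.mean(np.log1p(np.exp(-z))) + 0.5 * reg * (w @ w)

def logistic_grad(w, X, y, reg=1e-4):
    z = y * (X @ w)
    s = -y / (1.0 + np.exp(z))
    return X.T @ s / len(y) + reg * w

def accumulating_sample_newton(X, y, n0=128, growth=2.0):
    n, d = X.shape
    w, m = np.zeros(d), n0
    while True:
        idx = np.arange(min(m, n))               # current sample (taken as a prefix for simplicity)
        res = minimize(logistic_loss, w, args=(X[idx], y[idx]),
                       jac=logistic_grad, method="Newton-CG",
                       options={"maxiter": 10})  # inexact inner solve
        w = res.x                                # warm start for the next stage
        if m >= n:
            return w
        m = int(growth * m)                      # enlarge the sample
```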

An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning

1 code implementation • 14 Nov 2017 • Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takáč

In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of $\mathcal{O}(1/t^2)$ in terms of reducing suboptimality.

BIG-bench Machine Learning • Distributed Optimization
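
For intuition on how an $\mathcal{O}(1/t^2)$ rate is typically obtained, the sketch below wraps a generic black-box inner solver in standard Nesterov/FISTA-style momentum; it is not the paper's accelerated CoCoA+ update, and the extrapolation parameters are textbook choices.

```python
# Sketch only: a standard Nesterov/FISTA-style momentum wrapper around a
# black-box inner step (not the accelerated CoCoA+ update from the paper).
import numpy as np

def accelerated_outer_loop(inner_step, w0, T=100):
    """inner_step(v) should return the next (approximate) iterate from point v."""
    w, v, theta = w0.copy(), w0.copy(), 1.0
    for _ in range(T):
        w_next = inner_step(v)                                    # one inner solver round
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))  # momentum schedule
        v = w_next + ((theta - 1.0) / theta_next) * (w_next - w)  # extrapolation step
        w, theta = w_next, theta_next
    return w
```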

Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences

no code implementations • 10 Oct 2017 • Majid Jahani, Naga Venkata C. Gudapati, Chenxin Ma, Rachael Tappenden, Martin Takáč

In this work we introduce the concept of an Underestimate Sequence (UES), which is motivated by Nesterov's estimate sequence.
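
A rough way to see the contrast with Nesterov's estimate sequence (the paper's precise definition may differ) is that an underestimate sequence asks each model to lower-bound the objective globally, which makes the gap to its minimum a computable optimality certificate:

```latex
% Sketch only; the paper's exact definition of a UES may differ.
\[
  \text{Estimate sequence (Nesterov): } \phi_k(x) \le (1-\lambda_k)\, f(x) + \lambda_k\, \phi_0(x),
  \qquad \lambda_k \to 0,
\]
\[
  \text{Underestimate sequence: } \phi_k(x) \le f(x)\ \ \forall x,
  \qquad f(x_k) - \min_x \phi_k(x)\ \text{is an optimality certificate.}
\]
```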

Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing

no code implementations • 16 Mar 2016 • Chenxin Ma, Martin Takáč

In this paper we study an inexact damped Newton method implemented in a distributed environment.

Distributed Optimization
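
A single-machine sketch of one such step is below: the Newton system is solved only approximately by conjugate gradient with capped iterations, and the step is damped by an approximate Newton decrement. The damping rule and constants are illustrative assumptions, not the paper's exact distributed scheme.

```python
# Sketch only: one inexact damped Newton step on a single machine.
# hess_vec is a Hessian-vector product oracle; the damping rule is an assumption.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def inexact_damped_newton_step(w, grad, hess_vec, cg_iters=50):
    d = len(w)
    H = LinearOperator((d, d), matvec=hess_vec)   # implicit Hessian
    p, _ = cg(H, grad, maxiter=cg_iters)          # inexact Newton direction
    delta = np.sqrt(max(p @ hess_vec(p), 0.0))    # approximate Newton decrement
    return w - p / (1.0 + delta)                  # damped update
```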

Distributed Optimization with Arbitrary Local Solvers

1 code implementation • 13 Dec 2015 • Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takáč

To this end, we present a framework for distributed optimization that allows the flexibility of using arbitrary solvers locally on each machine, yet maintains competitive performance against other state-of-the-art special-purpose distributed methods.

Distributed Optimization
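
The outer communication pattern this abstract describes might look roughly like the sketch below: each machine runs any local solver on its own partition, and the resulting updates are aggregated and broadcast. The hinge-loss SGD example solver and the aggregation parameter gamma are simplified placeholders, not the framework's exact local subproblem.

```python
# Sketch only: outer loop with an arbitrary per-machine local solver.
# The local subproblem and aggregation weight gamma are placeholders.
import numpy as np

def distributed_round(w, partitions, local_solver, gamma=1.0):
    """partitions: list of (X_k, y_k); local_solver returns a local update."""
    deltas = [local_solver(w, Xk, yk) for Xk, yk in partitions]  # local work per machine
    return w + gamma * np.mean(deltas, axis=0)                   # aggregate and broadcast

def sgd_local_solver(w, Xk, yk, lr=0.1, epochs=5):
    """Example plug-in solver: a few epochs of hinge-loss SGD (assumed, no regularizer)."""
    v = w.copy()
    for _ in range(epochs):
        for i in np.random.permutation(len(yk)):
            if yk[i] * (Xk[i] @ v) < 1.0:        # hinge-loss subgradient step
                v += lr * yk[i] * Xk[i]
    return v - w                                  # local update delta_k
```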

Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?

no code implementations • 22 Oct 2015 • Chenxin Ma, Martin Takáč

In this paper we study how the way the data is partitioned affects distributed optimization.

Distributed Optimization
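
Concretely, the two schemes being compared amount to splitting the n-by-d data matrix across K machines by rows (samples) or by columns (features); a minimal illustration, with K chosen arbitrarily, is below.

```python
# Sketch only: the two ways of splitting a data matrix across K machines.
import numpy as np

def partition_by_samples(X, y, K):
    parts = np.array_split(np.arange(X.shape[0]), K)
    return [(X[idx], y[idx]) for idx in parts]   # each machine: full features, subset of rows

def partition_by_features(X, K):
    parts = np.array_split(np.arange(X.shape[1]), K)
    return [X[:, idx] for idx in parts]          # each machine: all rows, subset of coordinates
```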

Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption

no code implementations • 8 Jun 2015 • Chenxin Ma, Rachael Tappenden, Martin Takáč

We show that the famous SDCA algorithm for optimizing the SVM dual problem, or the stochastic coordinate descent method for the LASSO problem, fits into the framework of RC-FDM.
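
For reference, one SDCA coordinate update for the hinge-loss SVM dual, the kind of step the abstract says fits RC-FDM, has the standard closed form sketched below; this is the textbook update with L2 parameter lam, not code from the paper.

```python
# Sketch only: standard closed-form SDCA update for the hinge-loss SVM dual,
# with alpha_i in [0, 1] and w maintained as w = (1/(lam*n)) * sum_i alpha_i y_i x_i.
import numpy as np

def sdca_step(w, alpha, X, y, i, lam):
    n = len(y)
    xi, yi = X[i], y[i]
    delta = lam * n * (1.0 - yi * (xi @ w)) / (xi @ xi)      # unconstrained coordinate optimum
    new_alpha_i = np.clip(alpha[i] + delta, 0.0, 1.0)        # keep dual feasibility
    w = w + (new_alpha_i - alpha[i]) * yi * xi / (lam * n)   # keep w consistent with alpha
    alpha[i] = new_alpha_i
    return w, alpha
```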

Adding vs. Averaging in Distributed Primal-Dual Optimization

1 code implementation • 12 Feb 2015 • Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takáč

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck.

Distributed Optimization
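
The title's distinction comes down to how per-machine updates are combined each round: averaging is conservative and always safe, while adding makes more progress per round but is only safe with suitably scaled local subproblems. A purely illustrative sketch of the two aggregation rules:

```python
# Sketch only: the two aggregation rules; local subproblem scaling details are omitted.
import numpy as np

def aggregate(deltas, rule="average"):
    if rule == "average":
        return np.mean(deltas, axis=0)   # conservative combination (gamma = 1/K)
    if rule == "add":
        return np.sum(deltas, axis=0)    # aggressive combination (gamma = 1)
    raise ValueError(f"unknown rule: {rule}")
```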
