Search Results for author: Farbod Roosta-Khorasani

Found 10 papers, 0 papers with code

GPU Accelerated Sub-Sampled Newton's Method

no code implementations26 Feb 2018 Sudhir B. Kylasa, Farbod Roosta-Khorasani, Michael W. Mahoney, Ananth Grama

In particular, in convex settings, we consider variants of classical Newton's method in which the Hessian and/or the gradient are randomly sub-sampled.

Second-order methods
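As a rough illustration of the sub-sampled Newton idea above (not the paper's GPU implementation), here is a minimal NumPy sketch of one step in which the Hessian is estimated from a uniform random subset of the data; the ridge-regularized least-squares loss and all function names are assumptions for the example.

```python
import numpy as np

def subsampled_newton_step(w, X, y, sample_size, reg=1e-3, rng=None):
    """One Newton step for ridge-regularized least squares, with the
    Hessian estimated from a uniform random sub-sample of the data."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Full gradient of f(w) = 1/(2n) ||Xw - y||^2 + (reg/2) ||w||^2
    grad = X.T @ (X @ w - y) / n + reg * w

    # Sub-sampled Hessian: average x_i x_i^T over a random subset S
    S = rng.choice(n, size=sample_size, replace=False)
    H = X[S].T @ X[S] / sample_size + reg * np.eye(d)

    # Newton direction computed from the cheaper sub-sampled Hessian
    return w - np.linalg.solve(H, grad)
```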

Out-of-sample extension of graph adjacency spectral embedding

no code implementations ICML 2018 Keith Levin, Farbod Roosta-Khorasani, Michael W. Mahoney, Carey E. Priebe

Many popular dimensionality reduction procedures have out-of-sample extensions, which allow a practitioner to apply a learned embedding to observations not seen in the initial training sample.

Dimensionality Reduction, Position
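For the adjacency spectral embedding specifically, a least-squares out-of-sample extension is one natural instance of this idea: embed a new vertex by regressing its adjacency vector onto the learned in-sample embedding. The sketch below is a minimal NumPy illustration of that pattern, not the paper's estimators; the function names are assumptions.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """In-sample embedding: top-d eigenvectors of A, scaled by sqrt(|eigenvalue|)."""
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

def out_of_sample_ls(X_hat, a):
    """Embed a new vertex from its adjacency vector `a` (to the n in-sample
    vertices) via linear least squares against the learned embedding X_hat."""
    return np.linalg.lstsq(X_hat, a, rcond=None)[0]
```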

Invariance of Weight Distributions in Rectified MLPs

no code implementations ICML 2018 Russell Tsuchida, Farbod Roosta-Khorasani, Marcus Gallagher

An interesting approach to analyzing neural networks that has received renewed attention is to examine the equivalent kernel of the neural network.
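As a concrete reference point, for a single hidden layer of ReLU units with i.i.d. Gaussian weights the equivalent kernel is the well-known first-order arc-cosine kernel; the paper's question is when this kernel is invariant to the choice of weight distribution. The notation below is an assumption for this reference formula.

```latex
% Equivalent kernel of a single hidden layer of ReLU units with i.i.d.
% Gaussian weights (the first-order arc-cosine kernel):
k(\mathbf{x}, \mathbf{y})
  = \frac{\|\mathbf{x}\|\,\|\mathbf{y}\|}{2\pi}
    \bigl(\sin\theta + (\pi - \theta)\cos\theta\bigr),
\qquad
\theta = \arccos\!\left(\frac{\mathbf{x}^\top \mathbf{y}}
                             {\|\mathbf{x}\|\,\|\mathbf{y}\|}\right).
```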

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization

no code implementations NeurIPS 2018 Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney

For a distributed computing environment, we consider the empirical risk minimization problem and propose a distributed, communication-efficient Newton-type optimization method.

Distributed Computing, Distributed Optimization
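A simplified, single-process sketch of the general pattern behind this kind of method: one reduce for the exact global gradient, a local Newton solve on each worker's data shard, and an average of the local directions. The quadratic loss, unit step size, and direct local solves are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def giant_style_step(w, shards, reg=1e-3):
    """One communication-efficient Newton-type step: form the exact global
    gradient, let each "worker" (data shard) solve a Newton system with its
    local Hessian, then average the local directions."""
    d = w.shape[0]
    n = sum(X.shape[0] for X, _ in shards)

    # Round 1 (reduce): exact global gradient of the quadratic loss
    grad = sum(X.T @ (X @ w - y) for X, y in shards) / n + reg * w

    # Round 2 (map): each worker computes a local approximate Newton direction
    directions = [
        np.linalg.solve(X.T @ X / X.shape[0] + reg * np.eye(d), grad)
        for X, _ in shards
    ]

    # Driver averages the local directions and takes a unit step
    return w - np.mean(directions, axis=0)
```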

Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study

no code implementations25 Aug 2017 Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney

While first-order optimization methods such as stochastic gradient descent (SGD) are popular in machine learning (ML), they come with well-known deficiencies, including relatively slow convergence, sensitivity to the settings of hyper-parameters such as the learning rate, stagnation at high training errors, and difficulty in escaping flat regions and saddle points.

BIG-bench Machine Learning, Second-order methods

Sub-sampled Newton Methods with Non-uniform Sampling

no code implementations NeurIPS 2016 Peng Xu, Jiyan Yang, Farbod Roosta-Khorasani, Christopher Ré, Michael W. Mahoney

As second-order methods prove to be effective in finding the minimizer to high precision, in this work we propose randomized Newton-type algorithms that exploit non-uniform sub-sampling of $\{\nabla^2 f_i(w)\}_{i=1}^{n}$, as well as inexact updates, as a means to reduce the computational complexity.

Second-order methods
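As a minimal sketch of non-uniform sub-sampling, the snippet below builds an unbiased Hessian estimate for a ridge-regression objective by sampling rows with probability proportional to their squared norms; row-norm sampling is used here as a simple stand-in for the paper's sampling schemes, and all names are assumptions.

```python
import numpy as np

def row_norm_subsampled_hessian(X, sample_size, reg=1e-3, rng=None):
    """Unbiased sub-sampled Hessian of f(w) = 1/(2n)||Xw - y||^2 + (reg/2)||w||^2,
    sampling rows with probability proportional to ||x_i||^2."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Non-uniform sampling probabilities p_i proportional to ||x_i||^2
    p = np.sum(X**2, axis=1)
    p = p / p.sum()

    # Sample with replacement and reweight each term by 1/(n * p_i * |S|)
    # so the estimator is unbiased for the full Hessian (1/n) X^T X.
    S = rng.choice(n, size=sample_size, replace=True, p=p)
    scale = 1.0 / (n * p[S] * sample_size)
    return (X[S] * scale[:, None]).T @ X[S] + reg * np.eye(d)
```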

FLAG n' FLARE: Fast Linearly-Coupled Adaptive Gradient Methods

no code implementations26 May 2016 Xiang Cheng, Farbod Roosta-Khorasani, Stefan Palombo, Peter L. Bartlett, Michael W. Mahoney

We consider first-order gradient methods for effectively optimizing a composite objective in the form of a sum of smooth and, potentially, non-smooth functions.
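For context on this problem class only, the template below is plain proximal gradient for a smooth-plus-non-smooth composite objective; it is the textbook baseline, not the FLAG or FLARE algorithms, and the function names are assumptions.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, w0, step, n_iters=100):
    """Plain proximal-gradient template for min_w f(w) + g(w), with f smooth
    and g non-smooth but prox-friendly."""
    w = w0.copy()
    for _ in range(n_iters):
        w = prox_g(w - step * grad_f(w), step)
    return w

def soft_threshold(v, step, lam=0.1):
    """Prox of g(w) = lam * ||w||_1, e.g. for a lasso-type objective."""
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
```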

Sub-Sampled Newton Methods I: Globally Convergent Algorithms

no code implementations18 Jan 2016 Farbod Roosta-Khorasani, Michael W. Mahoney

As a remedy, for all of our algorithms, we also give global convergence results for the case of inexact updates, where the underlying linear system is solved only approximately.
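A minimal sketch of such an inexact update, assuming the Newton system is solved approximately with a few conjugate-gradient iterations; the callables and their names are assumptions for the example.

```python
from scipy.sparse.linalg import cg

def inexact_newton_step(w, gradient, hessian, max_cg_iters=10):
    """Newton step with an inexact update: the system H p = -g is solved only
    approximately, here by a few conjugate-gradient iterations.  `gradient`
    and `hessian` are callables returning the gradient vector and (PSD)
    Hessian matrix at w."""
    g = gradient(w)
    H = hessian(w)
    p, _ = cg(H, -g, maxiter=max_cg_iters)  # approximate Newton direction
    return w + p
```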

Sub-Sampled Newton Methods II: Local Convergence Rates

no code implementations18 Jan 2016 Farbod Roosta-Khorasani, Michael W. Mahoney

In such problems, sub-sampling as a way to reduce $n$ can offer substantial computational savings.

Computational Efficiency, Second-order methods
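For reference, a minimal statement of the finite-sum setting and the uniform sub-sampled Hessian estimator that replaces the sum over all $n$ terms with a small sample; the notation is an assumption, following the standard form of these estimators.

```latex
% Finite-sum objective and uniform sub-sampled Hessian estimator:
F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w), \qquad
H_{S}(w) = \frac{1}{|S|}\sum_{i \in S} \nabla^2 f_i(w),
\qquad S \subseteq \{1,\dots,n\},\ |S| \ll n .
```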
