Second-order methods

44 papers with code • 0 benchmarks • 0 datasets

Second-order methods exploit second-order information, such as curvature (Hessian) estimates of an objective or second-order statistics of the data, rather than gradients or first-order statistics alone.
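
As a point of reference (not tied to any single paper below), the prototypical second-order update is the Newton step $x_{k+1} = x_k - H(x_k)^{-1} \nabla f(x_k)$, where $H$ is the Hessian of the objective $f$; the methods listed here approximate, randomize, or regularize this step to make it affordable at scale.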

Most implemented papers

Second-Order Stochastic Optimization for Machine Learning in Linear Time

brianbullins/lissa_code 12 Feb 2016

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to their efficient per-iteration complexity.

ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning

amirgholami/adahessian 1 Jun 2020

We introduce ADAHESSIAN, a second order stochastic optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates of the HESSIAN.
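
The curvature estimates in question are diagonal Hessian approximations obtained from Hessian-vector products with random Rademacher vectors (Hutchinson-style), which the paper additionally smooths over iterations. The sketch below illustrates that estimator in isolation; it is our own NumPy illustration, not the authors' implementation, and the function name and toy Hessian are ours.

    import numpy as np

    def hutchinson_diag(hvp, dim, n_samples=200, seed=0):
        # Estimate diag(H) as the average of z * (H z) over Rademacher vectors z,
        # using only Hessian-vector products (no explicit Hessian).
        rng = np.random.default_rng(seed)
        est = np.zeros(dim)
        for _ in range(n_samples):
            z = rng.choice([-1.0, 1.0], size=dim)
            est += z * hvp(z)
        return est / n_samples

    # Toy check against an explicit 2x2 Hessian.
    H = np.array([[3.0, 1.0], [1.0, 2.0]])
    print(hutchinson_diag(lambda v: H @ v, dim=2))  # roughly [3., 2.]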

Newtonian Monte Carlo: single-site MCMC meets second-order gradient methods

Johanpdrsn/Newtonian-Monte-Carlo 15 Jan 2020

NMC is similar to the Newton-Raphson update in optimization, where the second-order gradient is used to automatically scale the step size in each dimension.
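
For reference, the Newton-Raphson update referred to here rescales the gradient by the inverse Hessian, so each dimension gets a step size matched to its curvature. A minimal NumPy sketch of that update on a toy quadratic (our illustration, not the repository's code):

    import numpy as np

    def newton_step(grad, hess, x, damping=1e-6):
        # Newton-Raphson update: solving with the (damped) Hessian rescales the
        # gradient, so each dimension gets a step matched to its curvature.
        H = hess(x) + damping * np.eye(x.size)
        return x - np.linalg.solve(H, grad(x))

    # Quadratic with curvature 100 in one dimension and 1 in the other:
    # a single Newton step reaches (almost exactly) the minimum in both.
    grad = lambda x: np.array([100.0, 1.0]) * x
    hess = lambda x: np.diag([100.0, 1.0])
    print(newton_step(grad, hess, np.array([1.0, 1.0])))  # ~[0., 0.]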

Low Rank Saddle Free Newton: A Scalable Method for Stochastic Nonconvex Optimization

tomoleary/hessianlearn 7 Feb 2020

In this work we motivate the extension of Newton methods to the stochastic approximation (SA) regime and argue for the use of the scalable low-rank saddle-free Newton (LRSFN) method, which avoids forming the Hessian in favor of a low-rank approximation.
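
A minimal sketch of the idea, under the assumption of a matrix-free Lanczos factorization and with our own damping convention (not the authors' exact formulation): take a rank-r eigendecomposition of the Hessian from Hessian-vector products, replace eigenvalues by their absolute values (the saddle-free modification), and fall back to damping on the unresolved part of the spectrum.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    def lrsfn_step(hvp, grad, dim, rank=10, damping=1e-1):
        # Rank-r eigendecomposition of the Hessian via matrix-free Lanczos.
        lam, V = eigsh(LinearOperator((dim, dim), matvec=hvp), k=rank, which="LM")
        coeff = V.T @ grad
        # Saddle-free modification: use |lambda| (+ damping) on the captured
        # subspace, and plain damping on its orthogonal complement.
        step_lowrank = V @ (coeff / (np.abs(lam) + damping))
        step_rest = (grad - V @ coeff) / damping
        return -(step_lowrank + step_rest)

    # Toy indefinite quadratic with negative curvature in some directions.
    H = np.diag(np.linspace(-1.0, 5.0, 50))
    print(lrsfn_step(lambda v: H @ v, grad=np.ones(50), dim=50)[:3])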

M-FAC: Efficient Matrix-Free Approximations of Second-Order Information

IST-DASLab/M-FAC NeurIPS 2021

We propose two new algorithms as part of a framework called M-FAC: the first algorithm is tailored towards network compression and can compute the IHVP for dimension $d$, if the Hessian is given as a sum of $m$ rank-one matrices, using $O(dm^2)$ precomputation, $O(dm)$ cost for computing the IHVP, and query cost $O(m)$ for any single element of the inverse Hessian.
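
For intuition, when the Hessian is modeled as $\lambda I + \sum_i g_i g_i^T$, the IHVP can be built by folding in one rank-one term at a time with the Sherman-Morrison formula. The sketch below is our own direct, unoptimized version of that recursion, costing $O(dm^2)$ per query rather than the paper's $O(dm)$ after precomputation.

    import numpy as np

    def ihvp_rank_one_sum(grads, v, lam=1e-3):
        # Inverse-Hessian-vector product for H = lam*I + sum_i g_i g_i^T,
        # where the g_i are the rows of `grads`, applying Sherman-Morrison
        # once per rank-one term.
        m, d = grads.shape
        x = v / lam                      # H_0^{-1} v
        U = grads / lam                  # row j holds H_0^{-1} g_j
        for k in range(m):
            hk = U[k].copy()             # H_k^{-1} g_k
            denom = 1.0 + grads[k] @ hk
            x = x - hk * (grads[k] @ x) / denom
            U = U - np.outer(U @ grads[k], hk) / denom
        return x

    # Sanity check against an explicit solve on a small problem.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((5, 8))
    v = rng.standard_normal(8)
    H = 1e-3 * np.eye(8) + G.T @ G
    print(np.allclose(ihvp_rank_one_sum(G, v), np.linalg.solve(H, v)))  # True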

Near out-of-distribution detection for low-resolution radar micro-Doppler signatures

blupblupblup/doppler-signatures-generation 12 May 2022

We emphasize the relevance of OODD, and its specific supervision requirements, for the detection of a multimodal, diverse target class among similar radar targets and clutter in real-life critical systems.

A Gauss-Newton Approach for Min-Max Optimization in Generative Adversarial Networks

neelmishra/gauss-newton-based-minimax-solver 10 Apr 2024

It modifies the Gauss-Newton method to approximate the min-max Hessian and uses the Sherman-Morrison inversion formula to calculate the inverse.
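
The Sherman-Morrison formula mentioned here turns the inverse of a rank-one update, $(A + uv^T)^{-1}$, into a cheap correction of a known $A^{-1}$. A small self-contained check of the formula (our illustration, not the repository's min-max solver):

    import numpy as np

    def sherman_morrison_inverse(A_inv, u, v):
        # (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
        Au = A_inv @ u
        vA = v @ A_inv
        return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

    # Check: cheap rank-one correction of a known inverse vs. a full re-inversion.
    A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
    u, v = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])
    print(np.allclose(sherman_morrison_inverse(np.linalg.inv(A), u, v),
                      np.linalg.inv(A + np.outer(u, v))))  # True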

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning

GCaptainNemo/optimization-project 30 Jun 2017

We then discuss some of the distinctive features of these optimization problems, focusing on the examples of logistic regression and the training of deep neural networks.

Online Second Order Methods for Non-Convex Stochastic Optimizations

lixilinx/psgd_tf 26 Mar 2018

This paper proposes a family of online second-order methods for possibly non-convex stochastic optimization, based on the theory of preconditioned stochastic gradient descent (PSGD), which can be regarded as an enhanced stochastic Newton method able to handle gradient noise and non-convexity simultaneously.
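
Generically, a preconditioned update has the form $\theta \leftarrow \theta - \eta P g$ with $P$ estimated online. The sketch below is a deliberately simplified diagonal caricature of that idea using a single perturbation pair; it is not the PSGD algorithm from the repository, which fits a (possibly structured) preconditioner with its own criterion.

    import numpy as np

    def psgd_like_update(theta, grad_fn, eta=0.5, eps=1e-8, delta=1e-4,
                         rng=np.random.default_rng(0)):
        # Illustration only: estimate a diagonal preconditioner from one
        # perturbation pair (d_theta, d_grad), then take a preconditioned step.
        g = grad_fn(theta)
        d_theta = delta * rng.standard_normal(theta.size)
        d_grad = grad_fn(theta + d_theta) - g
        precond = np.abs(d_theta) / (np.abs(d_grad) + eps)  # ~ 1 / |diag Hessian|
        return theta - eta * precond * g

    # Quadratic with curvatures 100 and 1: the preconditioner equalizes them.
    grad_fn = lambda x: np.array([100.0, 1.0]) * x
    theta = np.array([1.0, 1.0])
    for _ in range(10):
        theta = psgd_like_update(theta, grad_fn)
    print(theta)  # close to [0., 0.]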

Large batch size training of neural networks with adversarial training and second-order information

amirgholami/hessianflow ICLR 2019

Our method exceeds the performance of existing solutions in terms of both accuracy and the number of SGD iterations (up to 1% higher accuracy and up to 5× fewer iterations, respectively).