Search Results for author: Fred Roosta

Found 16 papers, 6 papers with code

Non-PSD Matrix Sketching with Applications to Regression and Optimization

no code implementations 16 Jun 2021 Zhili Feng, Fred Roosta, David P. Woodruff

In this paper, we present novel dimensionality reduction methods for non-PSD matrices, as well as their "square-roots", which involve matrices with complex entries.

Dimensionality Reduction
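As a point of reference for the regression application, here is a minimal sketch-and-solve example for overdetermined least squares using a plain Gaussian sketching matrix. This is the textbook construction, not the paper's specialized methods for non-PSD matrices; all sizes and the noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 2000, 10, 400
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# exact solution of the full n-row least-squares problem
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# sketch-and-solve: compress the n-row problem to k rows with a
# Gaussian sketching matrix, then solve the small problem instead
S = rng.normal(size=(k, n)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
```

With k well above d, the sketched solution lands close to the exact one at a fraction of the cost of the full solve.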

DINO: Distributed Newton-Type Optimization Method

1 code implementation ICML 2020 Rixon Crane, Fred Roosta

Under minimal assumptions, we guarantee global sub-linear convergence of DINO to a first-order stationary point for general non-convex functions and arbitrary data distribution over the network.

Optimization and Control

Stochastic Normalizing Flows

no code implementations NeurIPS 2020 Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney

We introduce stochastic normalizing flows, an extension of continuous normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).

Variational Inference

Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks

1 code implementation 20 Feb 2020 Russell Tsuchida, Tim Pearce, Chris van der Heide, Fred Roosta, Marcus Gallagher

Secondly, and more generally, we analyse the fixed-point dynamics of iterated kernels corresponding to a broad range of activation functions.

Gaussian Processes
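To illustrate what a fixed point of an iterated kernel looks like, here is a sketch using the ReLU case, whose infinite-width correlation map has a well-known closed form (the degree-1 arc-cosine kernel); the paper's actual analysis concerns ELU and GELU networks, for which this simple formula does not apply.

```python
import numpy as np

def relu_correlation_map(rho):
    """One layer of the infinite-width ReLU kernel recursion, acting on
    the correlation between two unit-variance inputs."""
    theta = np.arccos(np.clip(rho, -1.0, 1.0))
    return (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

rho = 0.2
for _ in range(500):
    rho = relu_correlation_map(rho)
# the iteration drives every correlation toward the fixed point rho = 1
```

This degenerate fixed point at rho = 1 is exactly the kind of behaviour ("all inputs become perfectly correlated with depth") that motivates studying activations whose kernels avoid it.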

The reproducing Stein kernel approach for post-hoc corrected sampling

no code implementations 25 Jan 2020 Liam Hodgkinson, Robert Salomone, Fred Roosta

Stein importance sampling is a widely applicable technique based on kernelized Stein discrepancy, which corrects the output of approximate sampling algorithms by reweighting the empirical distribution of the samples.
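For intuition about the quantity driving the reweighting, here is a sketch of the kernelized Stein discrepancy itself (V-statistic form) for a one-dimensional standard normal target with an RBF kernel. It only measures sample quality; the paper's correction step, which solves for the importance weights, is not reproduced here, and the bandwidth choice is arbitrary.

```python
import numpy as np

def ksd_gaussian_target(x, ell=1.0):
    """Squared kernelized Stein discrepancy of samples x against N(0, 1),
    using an RBF kernel with bandwidth ell (V-statistic form)."""
    d = x[:, None] - x[None, :]              # pairwise differences
    k = np.exp(-d**2 / (2 * ell**2))         # RBF kernel matrix
    s = -x                                   # score of N(0,1): d/dx log p(x) = -x
    kp = (s[:, None] * s[None, :] * k        # s(x) s(y) k(x, y)
          + s[:, None] * (d / ell**2) * k    # s(x) d/dy k(x, y)
          + s[None, :] * (-d / ell**2) * k   # s(y) d/dx k(x, y)
          + (1 / ell**2 - d**2 / ell**4) * k)  # d2/dxdy k(x, y)
    return kp.mean()

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=300)   # samples from the target
bad = rng.normal(2.0, 1.0, size=300)    # samples from a shifted density
# the discrepancy is larger for the samples that mismatch the target
```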

Richer priors for infinitely wide multi-layer perceptrons

1 code implementation 29 Nov 2019 Russell Tsuchida, Fred Roosta, Marcus Gallagher

The model resulting from partially exchangeable priors is a GP, with an additional level of inference in the sense that the prior and posterior predictive distributions require marginalisation over hyperparameters.

LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data

no code implementations 27 Nov 2019 Ali Eshragh, Fred Roosta, Asef Nazari, Michael W. Mahoney

We first develop a new fast algorithm to estimate the leverage scores of an autoregressive (AR) model in big data regimes.

Time Series
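To show what the leverage scores of an AR model are, here is a sketch that computes them exactly via a thin QR factorization of the lagged design matrix. This is the brute-force definition the paper's fast algorithm is designed to estimate, not the fast algorithm itself; the AR(1) coefficient and series length are illustrative.

```python
import numpy as np

def ar_design_matrix(y, p):
    """Lagged design matrix of an AR(p) model: row t is (y[t-1], ..., y[t-p])."""
    n = len(y)
    return np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])

def leverage_scores(X):
    """Exact leverage scores: squared row norms of Q from a thin QR of X."""
    q, _ = np.linalg.qr(X)
    return np.sum(q**2, axis=1)

rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):                  # simulate an AR(1) series
    y[t] = 0.8 * y[t - 1] + rng.normal()

X = ar_design_matrix(y, p=2)
scores = leverage_scores(X)
# each score lies in [0, 1], and they sum to the column rank of X (here 2)
```

Rows with large scores are the most influential observations, which is what makes leverage-based sub-sampling effective in big data regimes.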

Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings

no code implementations 29 Sep 2019 Keith Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe

In both cases, we prove that when the underlying graph is generated according to a latent space model called the random dot product graph, which includes the popular stochastic block model as a special case, an out-of-sample extension based on a least-squares objective obeys a central limit theorem about the true latent position of the out-of-sample vertex.

Dimensionality Reduction, Graph Embedding +1
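The least-squares out-of-sample extension can be sketched in a few lines under the random dot product graph model. For simplicity this uses the true in-sample latent positions where a real pipeline would use a computed spectral embedding; the sizes and the new vertex's position are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 2
X = rng.uniform(0.1, 0.6, size=(n, d))   # in-sample latent positions (RDPG)
w_true = np.array([0.5, 0.4])            # latent position of the new vertex
p = X @ w_true                           # RDPG edge probabilities to the new vertex
a = rng.binomial(1, p).astype(float)     # observed edges of the out-of-sample vertex

# out-of-sample extension: least-squares fit of the new vertex's
# adjacency vector against the in-sample embedding
w_hat, *_ = np.linalg.lstsq(X, a, rcond=None)
```

The estimate concentrates around the true latent position as n grows, which is the behaviour the paper's central limit theorem makes precise.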

Stability Analysis of Newton-MR Under Hessian Perturbations

1 code implementation 13 Sep 2019 Yang Liu, Fred Roosta

Recently, the stability of Newton-CG under Hessian perturbations, i.e., inexact curvature information, has been extensively studied.

Optimization and Control

Implicit Langevin Algorithms for Sampling From Log-concave Densities

no code implementations 29 Mar 2019 Liam Hodgkinson, Robert Salomone, Fred Roosta

Theoretical and algorithmic properties of the resulting sampling methods for $ \theta \in [0, 1] $ and a range of step sizes are established.
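A minimal sketch of the theta-method Langevin iteration, in the special case of a standard normal target where the score $-x$ makes the implicit update solvable in closed form; the step size, chain count, and step count are arbitrary choices, and general targets would need a nonlinear solve at each step.

```python
import numpy as np

def theta_langevin_gaussian(theta, h, n_steps, rng):
    """theta-method Langevin iteration for the standard normal target.
    The implicit equation x_new = x + h*((1-theta)*(-x) + theta*(-x_new))
    + sqrt(2h)*xi is linear here, so it can be solved in closed form."""
    x = np.zeros(10_000)                         # many chains in parallel
    for _ in range(n_steps):
        noise = np.sqrt(2 * h) * rng.normal(size=x.shape)
        x = (x * (1 - h * (1 - theta)) + noise) / (1 + h * theta)
    return x

rng = np.random.default_rng(3)
samples = theta_langevin_gaussian(theta=1.0, h=0.5, n_steps=200, rng=rng)
# the fully implicit scheme (theta = 1) remains stable even at large step sizes
```

In this linear-Gaussian case the theta = 1 chain's stationary variance works out to 2/(2+h), slightly below the target's 1, which illustrates the kind of step-size-dependent bias the theory quantifies.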

DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization

1 code implementation NeurIPS 2019 Rixon Crane, Fred Roosta

For optimization of a sum of functions in a distributed computing environment, we present a novel communication efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods.

Distributed Computing

Exchangeability and Kernel Invariance in Trained MLPs

no code implementations 19 Oct 2018 Russell Tsuchida, Fred Roosta, Marcus Gallagher

In the analysis of machine learning models, it is often convenient to assume that the parameters are IID.

Newton-MR: Inexact Newton Method With Minimum Residual Sub-problem Solver

no code implementations 30 Sep 2018 Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney

We consider a variant of the inexact Newton method, called Newton-MR, in which the least-squares sub-problems are solved approximately using the Minimum Residual (MINRES) method.
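The shape of a Newton-MR iteration can be sketched as follows on a toy smooth problem. For self-containment, a direct least-squares solve of the sub-problem min_p ||Hp + g|| stands in for the iterative MINRES solver the method actually uses, and the test function and iteration count are arbitrary.

```python
import numpy as np

def grad(x):
    return x**3 + x           # gradient of f(x) = sum(x**4/4 + x**2/2)

def hess(x):
    return np.diag(3 * x**2 + 1)

x = 2.0 * np.ones(5)
for _ in range(10):
    g, H = grad(x), hess(x)
    # Newton-MR sub-problem: p = argmin_p ||H p + g||; here a direct
    # least-squares solve stands in for an (inexact) MINRES solve
    p, *_ = np.linalg.lstsq(H, -g, rcond=None)
    x = x + p
# the iterates converge to the minimizer at the origin
```

Because the sub-problem is posed as a residual minimization rather than a linear solve, the same template remains meaningful when the Hessian is singular or indefinite.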

Newton-ADMM: A Distributed GPU-Accelerated Optimizer for Multiclass Classification Problems

1 code implementation 18 Jul 2018 Chih-Hao Fang, Sudhir B. Kylasa, Fred Roosta, Michael W. Mahoney, Ananth Grama

First-order optimization methods, such as stochastic gradient descent (SGD) and its variants, are widely used in machine learning applications due to their simplicity and low per-iteration costs.

Classification, General Classification

Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information

no code implementations 23 Aug 2017 Peng Xu, Fred Roosta, Michael W. Mahoney

In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
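A minimal sketch of uniform Hessian sub-sampling for finite-sum minimization, using ridge-regularized logistic regression as the finite sum. The plain Newton update shown here is a simplification: the paper's methods wrap the sub-sampled Hessian in trust-region or cubic regularization safeguards, and the batch size, regularization, and data model are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def full_gradient(w, X, y, lam=0.1):
    m = sigmoid(-y * (X @ w))
    return -(y * m) @ X / len(y) + lam * w

def subsampled_hessian(w, X, y, batch, lam=0.1):
    """Hessian of the regularized logistic loss, estimated from a
    uniform sub-sample of the n terms in the finite sum."""
    Xb = X[batch]
    s = sigmoid(Xb @ w)
    D = s * (1 - s)
    return (Xb * D[:, None]).T @ Xb / len(batch) + lam * np.eye(len(w))

rng = np.random.default_rng(4)
n, d = 5000, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.where(X @ w_true + rng.normal(size=n) > 0, 1.0, -1.0)

w = np.zeros(d)
for _ in range(15):
    g = full_gradient(w, X, y)                       # exact gradient
    batch = rng.choice(n, size=200, replace=False)   # uniform sub-sample
    H = subsampled_hessian(w, X, y, batch)           # inexact Hessian
    w = w - np.linalg.solve(H, g)
```

Since the exact gradient is retained, the only fixed point of the iteration is the true stationary point; the sub-sampled Hessian just trades per-iteration cost against the quality of the curvature information.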
