
no code implementations • 16 Jun 2021 • Zhili Feng, Fred Roosta, David P. Woodruff

In this paper, we present novel dimensionality reduction methods for non-PSD matrices, as well as their "square roots", which involve matrices with complex entries.

no code implementations • 18 Oct 2020 • Vektor Dewanto, George Dunn, Ali Eshragh, Marcus Gallagher, Fred Roosta

Reinforcement learning is an important part of artificial intelligence.

1 code implementation • ICML 2020 • Rixon Crane, Fred Roosta

Under minimal assumptions, we guarantee global sub-linear convergence of DINO to a first-order stationary point for general non-convex functions and arbitrary data distribution over the network.

Optimization and Control

no code implementations • NeurIPS 2020 • Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney

We introduce stochastic normalizing flows, an extension of continuous normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).

1 code implementation • 20 Feb 2020 • Russell Tsuchida, Tim Pearce, Chris van der Heide, Fred Roosta, Marcus Gallagher

Secondly, and more generally, we analyse the fixed-point dynamics of iterated kernels corresponding to a broad range of activation functions.
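
As a concrete illustration of such fixed-point dynamics, the normalized kernel (correlation) map for ReLU activations has the standard arc-cosine closed form ρ′ = (√(1−ρ²) + (π − arccos ρ)·ρ)/π; the minimal sketch below (not the paper's own analysis, just the textbook map) iterates it and shows correlations being driven toward the fixed point ρ = 1:

```python
import math

def relu_corr_map(rho):
    # Normalized arc-cosine kernel map for ReLU activations:
    # rho' = (sqrt(1 - rho^2) + (pi - arccos(rho)) * rho) / pi
    rho = max(-1.0, min(1.0, rho))
    return (math.sqrt(max(0.0, 1.0 - rho * rho))
            + (math.pi - math.acos(rho)) * rho) / math.pi

rho = 0.2
for _ in range(50):
    rho = relu_corr_map(rho)
# iterating the kernel drives the correlation toward the fixed point rho = 1
```

The derivative of the map at ρ = 1 equals 1, so convergence to the fixed point is sub-geometric, which is one reason the fixed-point dynamics are worth analysing rather than obvious.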

no code implementations • 25 Jan 2020 • Liam Hodgkinson, Robert Salomone, Fred Roosta

Stein importance sampling is a widely applicable technique based on kernelized Stein discrepancy, which corrects the output of approximate sampling algorithms by reweighting the empirical distribution of the samples.
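
A minimal sketch of the reweighting idea, under simplifying assumptions not taken from the paper: a standard normal target (score s(x) = −x), an RBF base kernel, and the non-negativity constraint on the weights dropped so the quadratic objective w᙭Kw subject to Σw = 1 has a closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(0)

def stein_kernel_gaussian(x, h=1.0):
    # Stein kernel k_p for a standard normal target (score s(x) = -x)
    # built on an RBF base kernel with bandwidth h:
    # k_p(x,y) = dxdy k + s(x) dy k + s(y) dx k + s(x) s(y) k
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2 * h ** 2))
    return k * (1.0 / h ** 2 - d ** 2 / h ** 4 - d ** 2 / h ** 2
                + x[:, None] * x[None, :])

# Samples from a biased proposal N(0.5, 1) instead of the N(0, 1) target
x = rng.normal(0.5, 1.0, size=200)

# Minimize w^T K_p w subject to sum(w) = 1; the (unconstrained-sign)
# solution is proportional to K_p^{-1} 1
K = stein_kernel_gaussian(x) + 1e-8 * np.eye(len(x))
w = np.linalg.solve(K, np.ones(len(x)))
w /= w.sum()

# The reweighted mean moves from ~0.5 toward the target mean 0
corrected_mean = np.sum(w * x)
```

The jitter on the diagonal is only for numerical stability; practical implementations also enforce w ≥ 0, which turns the weight computation into a quadratic program.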

1 code implementation • 29 Nov 2019 • Russell Tsuchida, Fred Roosta, Marcus Gallagher

The model resulting from partially exchangeable priors is a GP, with an additional level of inference in the sense that the prior and posterior predictive distributions require marginalisation over hyperparameters.

no code implementations • 27 Nov 2019 • Ali Eshragh, Fred Roosta, Asef Nazari, Michael W. Mahoney

We first develop a new fast algorithm to estimate the leverage scores of an autoregressive (AR) model in big data regimes.
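
For context, the leverage scores in question are the diagonal entries of the hat matrix of the AR design matrix of lagged observations. This sketch computes them exactly on a toy AR(2) series via a thin QR factorization, i.e. the slow dense baseline that a fast big-data estimator would approximate (the series and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) series: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + noise
n, p = 500, 2
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

# AR design matrix: row for time t holds the lags (y_{t-1}, ..., y_{t-p})
X = np.column_stack([y[p - 1 - j : n - 1 - j] for j in range(p)])

# Exact leverage scores: squared row norms of Q from a thin QR of X
Q, _ = np.linalg.qr(X)
leverage = np.sum(Q ** 2, axis=1)
# scores lie in [0, 1] and sum to rank(X) = p
```

The QR route costs O(n p²), which is exactly what makes fast approximation schemes attractive when n is large.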

no code implementations • 29 Sep 2019 • Keith Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe

In both cases, we prove that when the underlying graph is generated according to a latent space model called the random dot product graph, which includes the popular stochastic block model as a special case, an out-of-sample extension based on a least-squares objective obeys a central limit theorem about the true latent position of the out-of-sample vertex.

1 code implementation • 13 Sep 2019 • Yang Liu, Fred Roosta

Recently, the stability of Newton-CG under Hessian perturbations, i.e., inexact curvature information, has been extensively studied.
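
A minimal sketch of that setting, on an invented convex quadratic rather than anything from the paper: conjugate gradient is run against a perturbed Hessian H + E, and the resulting step is still a descent direction because H + E remains positive definite for a small symmetric perturbation:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(3)

# Quadratic f(x) = 0.5 x^T H x - b^T x with a well-conditioned SPD Hessian
d = 10
M = rng.standard_normal((d, d))
H = M @ M.T + d * np.eye(d)
b = rng.standard_normal(d)
x = np.zeros(d)
g = H @ x - b

# Inexact curvature: CG only sees the perturbed Hessian H + E
E = 0.05 * np.eye(d)  # hypothetical small symmetric perturbation
p_step, info = cg(H + E, -g)

# Despite the perturbation, p_step satisfies g^T p < 0 (descent direction)
```

This is only the simplest stability statement; the interesting regimes are larger, possibly indefinite perturbations, where the analysis is far more delicate.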

Optimization and Control

no code implementations • 29 Mar 2019 • Liam Hodgkinson, Robert Salomone, Fred Roosta

Theoretical and algorithmic properties of the resulting sampling methods for $ \theta \in [0, 1] $ and a range of step sizes are established.
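
The θ-method family interpolates between explicit (θ = 0) and fully implicit (θ = 1) discretizations of the Langevin SDE. As a hedged sketch, assuming a standard normal target with linear drift f(x) = −x so the implicit update has a closed form (the general case requires solving a nonlinear equation per step):

```python
import numpy as np

rng = np.random.default_rng(42)

def theta_langevin_gaussian(theta, h, n_steps=20000):
    # theta-method discretization of dX = -X dt + sqrt(2) dW, which has
    # stationary distribution N(0, 1). For drift f(x) = -x the implicit
    # update solves in closed form:
    #   x' = ((1 - (1 - theta) h) x + sqrt(2 h) xi) / (1 + theta h)
    x = 0.0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        xi = rng.standard_normal()
        x = ((1.0 - (1.0 - theta) * h) * x
             + np.sqrt(2.0 * h) * xi) / (1.0 + theta * h)
        samples[k] = x
    return samples

s = theta_langevin_gaussian(theta=0.5, h=0.5)
# theta = 1/2 is exact for the Gaussian stationary variance: var(s) ~ 1
```

For this linear example the stationary variance of the chain is 2h/((1+θh)² − (1−(1−θ)h)²), which equals 1 exactly at θ = 1/2 for any step size, hinting at why the choice of θ matters.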

1 code implementation • NeurIPS 2019 • Rixon Crane, Fred Roosta

For optimization of a sum of functions in a distributed computing environment, we present a novel communication efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods.

no code implementations • 19 Oct 2018 • Russell Tsuchida, Fred Roosta, Marcus Gallagher

In the analysis of machine learning models, it is often convenient to assume that the parameters are IID.

no code implementations • 30 Sep 2018 • Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney

We consider a variant of inexact Newton Method, called Newton-MR, in which the least-squares sub-problems are solved approximately using Minimum Residual method.
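
A minimal sketch of the core step, not the authors' implementation: the symmetric (possibly indefinite) system H d = −g is solved in the least-squares sense with SciPy's MINRES, here on an invented convex quadratic, with any globalization strategy such as a line search omitted:

```python
import numpy as np
from scipy.sparse.linalg import minres

def newton_mr_step(grad, hess, x):
    # Inexact Newton step: solve the symmetric (possibly indefinite)
    # system H d = -g with MINRES, which minimizes the residual
    # ||H d + g|| over a Krylov subspace.
    g = grad(x)
    H = hess(x)
    d, info = minres(H, -g)
    return x + d

# Toy convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hess = lambda x: A
x = newton_mr_step(grad, hess, np.zeros(2))
# for a quadratic, one full Newton step lands at the minimizer A^{-1} b
```

MINRES only requires symmetry, not positive definiteness, which is what lets this kind of method reach beyond the strongly convex problems that classical Newton-CG assumes.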

1 code implementation • 18 Jul 2018 • Chih-Hao Fang, Sudhir B. Kylasa, Fred Roosta, Michael W. Mahoney, Ananth Grama

First-order optimization methods, such as stochastic gradient descent (SGD) and its variants, are widely used in machine learning applications due to their simplicity and low per-iteration costs.

no code implementations • 23 Aug 2017 • Peng Xu, Fred Roosta, Michael W. Mahoney

In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
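
A minimal sketch of uniform Hessian sub-sampling for a finite-sum problem, using an invented toy logistic regression; the trust-region and cubic regularization machinery of the paper is replaced here by a single damped Newton step purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-sum problem: f(w) = (1/n) sum_i log(1 + exp(-y_i x_i^T w))
n, d = 2000, 5
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n))

def full_gradient(w):
    z = y * (X @ w)
    s = -y / (1.0 + np.exp(z))  # derivative of log(1 + exp(-z_i))
    return X.T @ s / n

def subsampled_hessian(w, sample_size):
    # Uniform sub-sampling: average the Hessian terms of |S| random indices,
    # H_S(w) = (1/|S|) sum_{i in S} sigma_i (1 - sigma_i) x_i x_i^T
    idx = rng.choice(n, size=sample_size, replace=False)
    Xs = X[idx]
    sig = 1.0 / (1.0 + np.exp(-y[idx] * (Xs @ w)))
    weights = sig * (1.0 - sig)
    return (Xs * weights[:, None]).T @ Xs / sample_size

# One sub-sampled Newton step, lightly damped for numerical stability
w = np.zeros(d)
g0 = full_gradient(w)
H = subsampled_hessian(w, sample_size=200)
w = w - np.linalg.solve(H + 1e-3 * np.eye(d), g0)
```

The sample size governs the trade-off the paper's analysis quantifies: larger samples give a Hessian estimate closer to the full one, at proportionally higher per-iteration cost.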

Papers With Code is a free resource with all data licensed under CC-BY-SA.