Search Results for author: Haim Avron

Found 24 papers, 5 papers with code

Scaling Neural Tangent Kernels via Sketching and Random Features

no code implementations • 15 Jun 2021 • Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin

To accelerate learning with NTK, we design a near input-sparsity time approximation algorithm for NTK, by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of NTK (CNTK) can transform any image in time linear in the number of pixels.

Random Features for the Neural Tangent Kernel

1 code implementation • 3 Apr 2021 • Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin

We combine random features of the arc-cosine kernels with a sketching-based algorithm that runs in time linear in both the number of data points and the input dimension.
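
For context, the arc-cosine random features used as a building block here are standard (Cho & Saul, 2009). Below is a minimal NumPy sketch of that building block only, not of the paper's sketching algorithm; all parameter choices are illustrative.

```python
import numpy as np

def arccos1_features(X, n_features=2048, seed=0):
    """Random features for the order-1 arc-cosine kernel (Cho & Saul):
    k1(x, y) = 2 * E_w[relu(w.x) * relu(w.y)],  w ~ N(0, I).
    Returns Phi with Phi @ Phi.T approximating the k1 Gram matrix."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_features))
    return np.sqrt(2.0 / n_features) * np.maximum(X @ W, 0.0)

# Sanity check against the closed form
# k1(x, y) = ||x|| ||y|| (sin t + (pi - t) cos t) / pi, t = angle(x, y).
rng = np.random.default_rng(1)
x, y = rng.standard_normal(10), rng.standard_normal(10)
t = np.arccos(np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1, 1))
exact = np.linalg.norm(x) * np.linalg.norm(y) * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi
Phi = arccos1_features(np.stack([x, y]))
print(exact, Phi[0] @ Phi[1])  # the two values should be close
```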

Gauss-Legendre Features for Gaussian Process Regression

no code implementations • 4 Jan 2021 • Paz Fink Shustin, Haim Avron

Our method is very much inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration.

Gaussian Processes · Numerical Integration
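
As a rough illustration of quadrature-based feature maps in general (the paper's construction is more refined), here is a hedged one-dimensional sketch: Gauss-Legendre nodes and weights discretize the Fourier integral of the Gaussian kernel, giving deterministic features whose inner products approximate the kernel. The truncation length and node count below are illustrative.

```python
import numpy as np

def gauss_legendre_features(x, n_nodes=64, L=8.0):
    """Quadrature-based Fourier features for the 1-D Gaussian kernel
    k(x, y) = exp(-(x - y)^2 / 2) = integral of p(w) * cos(w (x - y)) dw,
    where p is the standard normal density, truncated to [-L, L]."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    w = L * nodes                      # map nodes from [-1, 1] to [-L, L]
    p = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)
    scale = np.sqrt(L * weights * p)   # quadrature weight times density, under the sqrt
    x = np.asarray(x)[:, None]
    return np.hstack([scale * np.cos(x * w), scale * np.sin(x * w)])

x = np.array([0.0, 0.3, 2.0])
Phi = gauss_legendre_features(x)
print(Phi @ Phi.T)                     # should be close to exp(-(x_i - x_j)^2 / 2)
```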

Faster Randomized Infeasible Interior Point Methods for Tall/Wide Linear Programs

no code implementations • NeurIPS 2020 • Agniva Chowdhury, Palma London, Haim Avron, Petros Drineas

Linear programming (LP) is used in many machine learning applications, such as $\ell_1$-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc.
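
To make that connection concrete, here is a hedged toy example, unrelated to the paper's interior point method, casting basis pursuit (min ||x||_1 subject to Ax = b) as an LP and solving it with SciPy's generic solver. Sizes are illustrative; the paper targets very tall/wide instances.

```python
import numpy as np
from scipy.optimize import linprog

# Split x = u - v with u, v >= 0 and minimize 1^T (u + v) subject to A(u - v) = b.
rng = np.random.default_rng(0)
m, n = 30, 80
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true

c = np.ones(2 * n)                       # objective: sum of u + v = ||x||_1
A_eq = np.hstack([A, -A])                # encodes A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.round(x_hat[:5], 3))            # should recover the sparse x_true
```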

Experimental Design for Overparameterized Learning with Application to Single Shot Deep Active Learning

no code implementations • 27 Sep 2020 • Neta Shoham, Haim Avron

Unfortunately, classical theory on optimal experimental design focuses on selecting examples in order to learn underparameterized (and thus non-interpolative) models, while modern machine learning models such as deep neural networks are overparameterized and often trained to be interpolative.

Active Learning

Dynamic Graph Convolutional Networks Using the Tensor M-Product

1 code implementation • ICLR 2020 • Osman Asif Malik, Shashanka Ubaru, Lior Horesh, Misha E. Kilmer, Haim Avron

In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs.

Edge Classification · Link Prediction · +1

Preconditioned Riemannian Optimization on the Generalized Stiefel Manifold

no code implementations • 5 Feb 2019 • Boris Shustin, Haim Avron

In this paper we develop the geometric components required to perform Riemannian optimization on the generalized Stiefel manifold equipped with a non-standard metric, and illustrate theoretically and numerically the use of those components and the effect of Riemannian preconditioning for solving optimization problems on the generalized Stiefel manifold.

Dimensionality Reduction · Riemannian Optimization
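
As a hedged illustration of one such geometric component, a common textbook retraction (not necessarily the one developed in the paper, whose geometry uses a non-standard metric) maps an ambient point back onto the generalized Stiefel manifold {X : X^T B X = I}:

```python
import numpy as np

def retract_gen_stiefel(Y, B):
    """Polar-like retraction X = Y (Y^T B Y)^(-1/2) onto {X : X^T B X = I_p}."""
    M = Y.T @ B @ Y                      # small p x p SPD matrix
    w, V = np.linalg.eigh(M)
    return Y @ (V * (1.0 / np.sqrt(w))) @ V.T   # multiply by M^(-1/2)

rng = np.random.default_rng(0)
n, p = 6, 2
C = rng.standard_normal((n, n))
B = C @ C.T + n * np.eye(n)              # synthetic SPD matrix defining the manifold
X = retract_gen_stiefel(rng.standard_normal((n, p)), B)
print(np.round(X.T @ B @ X, 6))          # ~ identity: X lies on the manifold
```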

A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms

no code implementations • 20 Dec 2018 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh

We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the *statistical dimension* of the allowed power spectrum of that class.

Stable Tensor Neural Networks for Rapid Deep Learning

no code implementations • 15 Nov 2018 • Elizabeth Newman, Lior Horesh, Haim Avron, Misha Kilmer

To exemplify the elegant, matrix-mimetic algebraic structure of our $t$-NNs, we expand on recent work (Haber and Ruthotto, 2017) which interprets deep neural networks as discretizations of non-linear differential equations and introduces stable neural networks which promote superior generalization.

Random Fourier Features for Kernel Ridge Regression: Approximation Bounds and Statistical Guarantees

no code implementations • ICML 2017 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh

Qualitatively, our results are twofold: on the one hand, we show that random Fourier feature approximation can provably speed up kernel ridge regression under reasonable assumptions.
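
For reference, the random Fourier features in question follow Rahimi and Recht's recipe. A minimal sketch of using them to speed up kernel ridge regression (illustrative parameters, not the paper's analysis):

```python
import numpy as np

def rff(X, n_features=500, gamma=1.0, seed=0):
    """Random Fourier features for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2):  z(x)^T z(y) ~= k(x, y)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_features)) * np.sqrt(2 * gamma)
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Ridge regression on the features: O(n m^2) instead of O(n^3) exact KRR.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Z, lam = rff(X), 1e-2
alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
print(np.mean((Z @ alpha - y) ** 2))   # small training MSE expected
```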

Sketching for Principal Component Regression

no code implementations • 7 Mar 2018 • Liron Mor-Yosef, Haim Avron

Principal component regression (PCR) is a useful method for regularizing linear regression.
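
For orientation, a minimal textbook version of PCR (the paper's contribution, accelerating the PCA step via sketching, is not shown here):

```python
import numpy as np

def pcr(X, y, k):
    """Principal component regression: project onto the top-k right singular
    vectors of X, solve least squares in that subspace, map back."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Vk = Vt[:k].T                            # top-k principal directions
    w_k = np.linalg.lstsq(X @ Vk, y, rcond=None)[0]
    return Vk @ w_k                          # coefficients in the original space

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X @ rng.standard_normal(30) + 0.1 * rng.standard_normal(200)
print(pcr(X, y, k=10).shape)                 # (30,)
```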

Stochastic Chebyshev Gradient Descent for Spectral Optimization

1 code implementation • NeurIPS 2018 • Insu Han, Haim Avron, Jinwoo Shin

A large class of machine learning techniques requires the solution of optimization problems involving spectral functions of parametric matrices, e.g., log-determinant and nuclear norm.

Should You Derive, Or Let the Data Drive? An Optimization Framework for Hybrid First-Principles Data-Driven Modeling

no code implementations • 12 Nov 2017 • Remi R. Lam, Lior Horesh, Haim Avron, Karen E. Willcox

This work takes a different perspective and targets the construction of a correction model operator with implicit attributes.

Decision Making

Experimental Design for Non-Parametric Correction of Misspecified Dynamical Models

no code implementations • 2 May 2017 • Gal Shulkind, Lior Horesh, Haim Avron

We consider a class of misspecified dynamical models where the governing term is only approximately known.

Sharper Bounds for Regularized Data Fitting

no code implementations • 10 Nov 2016 • Haim Avron, Kenneth L. Clarkson, David P. Woodruff

We study regularization both in a fairly broad setting, and in the specific context of the popular and widely used technique of ridge regularization; for the latter, as applied to each of these problems, we show algorithmic resource bounds in which the {\em statistical dimension} appears in places where in previous bounds the rank would appear.
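
For ridge regularization, the statistical dimension has a simple closed form in terms of the singular values, sd_lambda(A) = sum_i s_i^2 / (s_i^2 + lambda), which the following hedged snippet computes on synthetic data:

```python
import numpy as np

def statistical_dimension(A, lam):
    """sd_lambda(A) = sum_i s_i^2 / (s_i^2 + lambda): always <= rank(A),
    and much smaller when the spectrum decays or lambda is large."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s**2 / (s**2 + lam))

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 100)) * (0.8 ** np.arange(100))  # decaying spectrum
print(np.linalg.matrix_rank(A), statistical_dimension(A, lam=1.0))
# rank vs. the (much smaller) statistical dimension
```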

Faster Kernel Ridge Regression Using Sketching and Preconditioning

no code implementations • 10 Nov 2016 • Haim Avron, Kenneth L. Clarkson, David P. Woodruff

The preconditioner is based on random feature maps, such as random Fourier features, which have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods, such as kernel ridge regression, by resorting to approximations.
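
A condensed sketch of the overall recipe (illustrative sizes, not the paper's exact algorithm): build a random feature matrix Z with Z Z^T ~= K, use P = Z Z^T + lam*I as a preconditioner applied through the Woodbury identity, and run conjugate gradients on the regularized kernel system.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n, d, m, lam = 1000, 5, 200, 1e-1
X = rng.uniform(-1, 1, (n, d))
y = rng.standard_normal(n)

sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T))   # Gaussian kernel, gamma = 1

# Random Fourier feature matrix Z (n x m) with Z Z^T ~= K.
W = rng.standard_normal((d, m)) * np.sqrt(2.0)
Z = np.sqrt(2.0 / m) * np.cos(X @ W + rng.uniform(0, 2 * np.pi, m))

# Apply P^{-1} via Woodbury: P^{-1} v = (v - Z (Z^T Z + lam I)^{-1} Z^T v) / lam.
G = Z.T @ Z + lam * np.eye(m)
def apply_Pinv(v):
    return (v - Z @ np.linalg.solve(G, Z.T @ v)) / lam

M = LinearOperator((n, n), matvec=apply_Pinv)
alpha, info = cg(K + lam * np.eye(n), y, M=M)
print(info)   # 0 means CG converged; the preconditioner cuts the iteration count
```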

Hierarchically Compositional Kernels for Scalable Nonparametric Learning

no code implementations • 2 Aug 2016 • Jie Chen, Haim Avron, Vikas Sindhwani

We propose a novel class of kernels to alleviate the high computational cost of large-scale nonparametric learning with kernel methods.

Approximating the Spectral Sums of Large-scale Matrices using Chebyshev Approximations

1 code implementation • 3 Jun 2016 • Insu Han, Dmitry Malioutov, Haim Avron, Jinwoo Shin

Computation of the trace of a matrix function plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding).

Data Structures and Algorithms
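
The general recipe behind this line of work combines a Chebyshev polynomial approximation of the scalar function with Hutchinson's stochastic trace estimator, so only matrix-vector products with A are needed. A hedged sketch for log-determinant (illustrative degree and probe count):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def logdet_chebyshev(A, a, b, deg=40, n_probe=30, seed=0):
    """Estimate log det(A) = tr(log A) for SPD A with spectrum in [a, b]."""
    # Chebyshev coefficients of log on [a, b], via the affine map to [-1, 1].
    coeffs = C.chebinterpolate(lambda t: np.log((b - a) / 2 * t + (a + b) / 2), deg)
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probe):
        v = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe
        # Three-term recurrence for T_k(A~) v, where A~ = (2A - (a+b)I) / (b-a).
        w_prev, w = v, (2 * (A @ v) - (a + b) * v) / (b - a)
        acc = coeffs[0] * v + coeffs[1] * w
        for k in range(2, deg + 1):
            w_prev, w = w, (2 * (2 * (A @ w) - (a + b) * w) / (b - a)) - w_prev
            acc += coeffs[k] * w
        total += v @ acc                              # one sample of v^T log(A) v
    return total / n_probe

rng = np.random.default_rng(1)
Q = rng.standard_normal((200, 200))
A = Q @ Q.T / 200 + np.eye(200)          # SPD, eigenvalues roughly in [1, 5]
print(logdet_chebyshev(A, a=0.5, b=6.0), np.linalg.slogdet(A)[1])  # should be close
```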

Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels

no code implementations • 29 Dec 2014 • Haim Avron, Vikas Sindhwani, Jiyan Yang, Michael Mahoney

These approximate feature maps arise as Monte Carlo approximations to integral representations of shift-invariant kernel functions (e.g., Gaussian kernel).
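
In a nutshell, the QMC idea replaces the i.i.d. frequencies of random Fourier features with a low-discrepancy sequence pushed through the inverse Gaussian CDF. A hedged sketch with illustrative parameters:

```python
import numpy as np
from scipy.stats import norm, qmc

def qmc_fourier_features(X, n_features=256, seed=0):
    """Fourier features for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2)
    whose frequencies come from a scrambled Halton sequence instead of
    i.i.d. Gaussian draws."""
    d = X.shape[1]
    u = qmc.Halton(d=d, scramble=True, seed=seed).random(n_features)
    W = norm.ppf(u).T                    # low-discrepancy 'Gaussian' frequencies
    b = np.random.default_rng(seed).uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).standard_normal((5, 3))
Z = qmc_fourier_features(X)
sq = np.sum(X**2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
print(np.max(np.abs(Z @ Z.T - K)))      # approximation error vs. exact kernel
```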

Subspace Embeddings for the Polynomial Kernel

no code implementations • NeurIPS 2014 • Haim Avron, Huy Nguyen, David Woodruff

Sketching is a powerful dimensionality reduction tool for accelerating statistical learning algorithms.

Dimensionality Reduction
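
The sketch analyzed in this line of work is TensorSketch (Pham and Pagh, 2013), which combines CountSketch with FFT-based convolution so that sketched inner products approximate the degree-q polynomial kernel (x.y)^q. A minimal hedged implementation of the basic transform:

```python
import numpy as np

def tensor_sketch(X, degree=2, sketch_dim=512, seed=0):
    """TensorSketch: ts(x) . ts(y) ~= (x . y)^degree in expectation."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    prod = np.ones((n, sketch_dim), dtype=complex)
    for _ in range(degree):
        h = rng.integers(0, sketch_dim, d)          # CountSketch hash buckets
        s = rng.choice([-1.0, 1.0], d)              # CountSketch signs
        cs = np.zeros((n, sketch_dim))
        np.add.at(cs, (slice(None), h), X * s)      # CountSketch of each row
        prod *= np.fft.fft(cs, axis=1)              # convolve sketches via FFT
    return np.real(np.fft.ifft(prod, axis=1))

X = np.random.default_rng(1).standard_normal((4, 20)) / np.sqrt(20)
Z = tensor_sketch(X)
print(np.round(Z @ Z.T, 3))
print(np.round((X @ X.T) ** 2, 3))                  # entries should roughly match
```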

Random Laplace Feature Maps for Semigroup Kernels on Histograms

no code implementations • CVPR 2014 • Jiyan Yang, Vikas Sindhwani, Quanfu Fan, Haim Avron, Michael W. Mahoney

With the goal of accelerating the training and testing complexity of nonlinear kernel methods, several recent papers have proposed explicit embeddings of the input data into low-dimensional feature spaces, where fast linear methods can instead be used to generate approximate solutions.

Event Detection · Image Classification

Sketching Structured Matrices for Faster Nonlinear Regression

no code implementations • NeurIPS 2013 • Haim Avron, Vikas Sindhwani, David Woodruff

Motivated by the desire to extend fast randomized techniques to nonlinear $l_p$ regression, we consider a class of structured regression problems.
