Search Results for author: Michael W. Mahoney

Found 145 papers, 54 papers with code

A Three-regime Model of Network Pruning

1 code implementation 28 May 2023 Yefan Zhou, Yaoqing Yang, Arin Chang, Michael W. Mahoney

Our approach uses temperature-like and load-like parameters to model the impact of neural network (NN) training hyperparameters on pruning performance.

Network Pruning

Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching

1 code implementation 28 May 2023 Ilgee Hong, Sen Na, Michael W. Mahoney, Mladen Kolar

Our method adaptively controls the accuracy of the randomized solver and the penalty parameters of the exact augmented Lagrangian, to ensure that the inexact Newton direction is a descent direction of the exact augmented Lagrangian.

When are ensembles really effective?

no code implementations 21 May 2023 Ryan Theisen, Hyunsuk Kim, Yaoqing Yang, Liam Hodgkinson, Michael W. Mahoney

Ensembling has a long history in statistical data analysis, with many impactful applications.

End-to-end codesign of Hessian-aware quantized neural networks for FPGAs and ASICs

no code implementations 13 Apr 2023 Javier Campos, Zhen Dong, Javier Duarte, Amir Gholami, Michael W. Mahoney, Jovan Mitrevski, Nhan Tran

We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs) for efficient field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) hardware.

Quantization

Full Stack Optimization of Transformer Inference: a Survey

no code implementations 27 Feb 2023 Sehoon Kim, Coleman Hooper, Thanakul Wattanawong, Minwoo Kang, Ruohan Yan, Hasan Genc, Grace Dinh, Qijing Huang, Kurt Keutzer, Michael W. Mahoney, Yakun Sophia Shao, Amir Gholami

In this work, we survey different approaches for efficient Transformer inference, including: (i) analysis and profiling of the bottlenecks in existing Transformer architectures and their similarities and differences with previous convolutional models; (ii) implications of Transformer architecture on hardware, including the impact of non-linear operations such as Layer Normalization, Softmax, and GELU, as well as linear operations, on hardware design; (iii) approaches for optimizing a fixed Transformer architecture; (iv) challenges in finding the right mapping and scheduling of operations for Transformer models; and (v) approaches for optimizing Transformer models by adapting the architecture using neural architecture search.

Neural Architecture Search Scheduling

Learning Physical Models that Can Respect Conservation Laws

1 code implementation 21 Feb 2023 Derek Hansen, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, Michael W. Mahoney

We provide a detailed analysis of ProbConserv on learning with the Generalized Porous Medium Equation (GPME), a widely-applicable parameterized family of PDEs that illustrates the qualitative properties of both easier and harder PDEs.

Big Little Transformer Decoder

1 code implementation 15 Feb 2023 Sehoon Kim, Karttikeya Mangalam, Suhong Moon, John Canny, Jitendra Malik, Michael W. Mahoney, Amir Gholami, Kurt Keutzer

To address this, we propose Big Little Decoder (BiLD), a framework that can improve inference efficiency and latency for a wide range of text generation applications.

Language Modelling Machine Translation +1

Gated Recurrent Neural Networks with Weighted Time-Delay Feedback

no code implementations 1 Dec 2022 N. Benjamin Erichson, Soon Hoe Lim, Michael W. Mahoney

We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies.

Human Activity Recognition speech-recognition +3

Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems

1 code implementation 29 Nov 2022 Yuchen Fang, Sen Na, Michael W. Mahoney, Mladen Kolar

We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.

Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes

no code implementations 14 Oct 2022 Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney

The quality of many modern machine learning models improves as model complexity increases, an effect that has been quantified, for predictive performance, with the non-monotonic double descent learning curve.

Gaussian Processes

Learning differentiable solvers for systems with hard constraints

no code implementations 18 Jul 2022 Geoffrey Négiar, Michael W. Mahoney, Aditi S. Krishnapriyan

Our method leverages differentiable optimization and the implicit function theorem to effectively enforce physical constraints.

Dictionary Learning

Adaptive Self-supervision Algorithms for Physics-informed Neural Networks

1 code implementation 8 Jul 2022 Shashank Subramanian, Robert M. Kirby, Michael W. Mahoney, Amir Gholami

We find that training vanilla PINNs for these problems can result in up to 70% prediction error in the solution, especially in the regime of low collocation points.

Neurotoxin: Durable Backdoors in Federated Learning

2 code implementations 12 Jun 2022 Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal

In this type of attack, the goal of the attacker is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs.

Backdoor Attack Federated Learning

Squeezeformer: An Efficient Transformer for Automatic Speech Recognition

3 code implementations 2 Jun 2022 Sehoon Kim, Amir Gholami, Albert Shaw, Nicholas Lee, Karttikeya Mangalam, Jitendra Malik, Michael W. Mahoney, Kurt Keutzer

After re-examining the design choices for both the macro and micro-architecture of Conformer, we propose Squeezeformer which consistently outperforms the state-of-the-art ASR models under the same training schemes.

Automatic Speech Recognition Automatic Speech Recognition (ASR)

Asymptotic Convergence Rate and Statistical Inference for Stochastic Sequential Quadratic Programming

1 code implementation 27 May 2022 Sen Na, Michael W. Mahoney

We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are deterministic.

Fat-Tailed Variational Inference with Anisotropic Tail Adaptive Flows

no code implementations 16 May 2022 Feynman Liang, Liam Hodgkinson, Michael W. Mahoney

While fat-tailed densities commonly arise as posterior and marginal distributions in robust models and scale mixtures, they present challenges when Gaussian-based variational inference fails to capture tail decay accurately.

Variational Inference

Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence

1 code implementation 20 Apr 2022 Sen Na, Michał Dereziński, Michael W. Mahoney

Remarkably, we show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage, and still exhibits a superlinear convergence rate nearly (up to a logarithmic factor) matching that of uniform Hessian averaging.
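
To make the averaging idea concrete, below is a minimal sketch (not the paper's implementation) of a damped stochastic Newton iteration that maintains a weighted running average of noisy Hessian estimates; the weighting function and damping constant are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a damped stochastic Newton iteration
# that keeps a weighted running average of noisy Hessian estimates.
# weight_fn(t) = 1.0 recovers uniform Hessian averaging; growing weights
# emphasize more recent, more local Hessians.
import numpy as np

def averaged_newton(grad_fn, hess_fn, x0, steps=100, weight_fn=lambda t: 1.0, damping=1e-8):
    x = x0.copy()
    H_bar = np.zeros((x0.size, x0.size))
    w_sum = 0.0
    for t in range(1, steps + 1):
        w = weight_fn(t)
        H_bar = (w_sum * H_bar + w * hess_fn(x)) / (w_sum + w)   # running weighted average
        w_sum += w
        x -= np.linalg.solve(H_bar + damping * np.eye(x.size), grad_fn(x))
    return x
```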

A Fast Post-Training Pruning Framework for Transformers

1 code implementation 29 Mar 2022 Woosuk Kwon, Sehoon Kim, Michael W. Mahoney, Joseph Hassoun, Kurt Keutzer, Amir Gholami

To address this, we propose a fast post-training pruning framework for Transformers that does not require any retraining.

Learning continuous models for continuous physics

no code implementations 17 Feb 2022 Aditi S. Krishnapriyan, Alejandro F. Queiruga, N. Benjamin Erichson, Michael W. Mahoney

A core issue with this approach is that ML models are typically trained on discrete data, using ML methodologies that are not aware of underlying continuity properties, which results in models that often do not capture the underlying continuous dynamics of a system of interest.

Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data

1 code implementation 6 Feb 2022 Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney

We also show that among the three HT distributions considered in our paper, the E-TPL fitting of ESDs performs the most robustly when the models are trained in experimental settings, while the PL fitting achieves the best performance on well-trained Huggingface models, and that both E-TPL and PL metrics (which are both shape metrics) outperform scale metrics.
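
As a rough illustration of what a power-law (PL) fit of an ESD involves, the sketch below computes the eigenvalue spectrum of a weight matrix and estimates a tail exponent with a simple Hill-type estimator; the fixed tail fraction and the estimator are simplifications, not the paper's E-TPL/PL fitting procedure.

```python
# Rough illustration only: eigenvalue spectral density (ESD) of a weight matrix
# and a Hill-type estimate of its power-law tail exponent.
import numpy as np

def esd(W):                              # eigenvalues of the correlation matrix W^T W / n
    return np.linalg.eigvalsh(W.T @ W / W.shape[0])

def hill_tail_exponent(eigs, tail_frac=0.1):
    tail = np.sort(eigs)[-max(2, int(tail_frac * len(eigs))):]
    return 1.0 + len(tail) / np.sum(np.log(tail / tail.min()))

W = np.random.randn(512, 256) / np.sqrt(512)   # stand-in for a trained layer's weights
print("estimated tail exponent:", hill_tail_exponent(esd(W)))
```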

NoisyMix: Boosting Model Robustness to Common Corruptions

no code implementations 2 Feb 2022 N. Benjamin Erichson, Soon Hoe Lim, Winnie Xu, Francisco Utrera, Ziang Cao, Michael W. Mahoney

For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, and thus robustness of neural networks is an increasingly important topic.

Data Augmentation

Noisy Feature Mixup

2 code implementations ICLR 2022 Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney

We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data augmentation that combines the best of interpolation based training and noise injection schemes.
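
A minimal sketch of the combination described above, mixup-style interpolation plus additive and multiplicative noise injection; the function name, noise levels, and Beta parameter are illustrative choices, not the reference implementation.

```python
# Minimal sketch of interpolation-based training combined with noise injection;
# hyperparameters and the exact noise model are illustrative assumptions.
import torch

def noisy_feature_mixup(x, y, alpha=1.0, add_noise=0.1, mult_noise=0.1):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]                 # mixup interpolation
    y_mix = lam * y + (1 - lam) * y[idx]
    mult = 1 + mult_noise * (2 * torch.rand_like(x_mix) - 1)
    return mult * x_mix + add_noise * torch.randn_like(x_mix), y_mix
```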

Data Augmentation

What's Hidden in a One-layer Randomly Weighted Transformer?

1 code implementation 8 Sep 2021 Sheng Shen, Zhewei Yao, Douwe Kiela, Kurt Keutzer, Michael W. Mahoney

Hidden within a one-layer randomly weighted Transformer, we find subnetworks that can achieve 29.45/17.29 BLEU on IWSLT14/WMT14.

Machine Translation Translation

Characterizing possible failure modes in physics-informed neural networks

1 code implementation NeurIPS 2021 Aditi S. Krishnapriyan, Amir Gholami, Shandian Zhe, Robert M. Kirby, Michael W. Mahoney

We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned.
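
For readers unfamiliar with the setup, the soft regularization mentioned above amounts to adding a PDE-residual penalty to the data loss; the sketch below does this for a 1D convection equation, with the PDE and penalty weight chosen purely for illustration.

```python
# Illustrative PINN loss: data misfit plus a soft PDE-residual penalty, here for
# the 1D convection equation u_t + beta * u_x = 0; beta and lam are arbitrary.
import torch

def pinn_loss(model, x_d, t_d, u_d, x_c, t_c, beta=5.0, lam=1.0):
    u_pred = model(torch.stack([x_d, t_d], dim=-1)).squeeze(-1)
    data_loss = torch.mean((u_pred - u_d) ** 2)

    x_c = x_c.clone().requires_grad_(True)
    t_c = t_c.clone().requires_grad_(True)
    u = model(torch.stack([x_c, t_c], dim=-1)).squeeze(-1)
    u_x = torch.autograd.grad(u.sum(), x_c, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t_c, create_graph=True)[0]
    return data_loss + lam * torch.mean((u_t + beta * u_x) ** 2)   # soft PDE constraint
```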

Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers

no code implementations 2 Aug 2021 Liam Hodgkinson, Umut Şimşekli, Rajiv Khanna, Michael W. Mahoney

Despite the ubiquitous use of stochastic optimization algorithms in machine learning, the precise impact of these algorithms and their dynamics on generalization performance in realistic non-convex settings is still poorly understood.

Generalization Bounds Stochastic Optimization

Taxonomizing local versus global structure in neural network loss landscapes

1 code implementation NeurIPS 2021 Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney

Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper.

Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update

1 code implementation NeurIPS 2021 Michał Dereziński, Jonathan Lacotte, Mert Pilanci, Michael W. Mahoney

In second-order optimization, a potential bottleneck can be computing the Hessian matrix of the optimized function at every iteration.

Stateful ODE-Nets using Basis Function Expansions

3 code implementations NeurIPS 2021 Alejandro Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

The recently-introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems.

Image Classification

Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics

no code implementations 1 Jun 2021 Charles H. Martin, Michael W. Mahoney

Our results highlight the subtlety of comparing models when both architectures and hyperparameters are varied; the complementary role of implicit scale versus implicit shape parameters in understanding NN model quality; and the need to go beyond one-size-fits-all metrics based on upper bounds from generalization theory to describe the performance of NN models.

Learning Theory

LEAP: Learnable Pruning for Transformer-based Models

1 code implementation 30 May 2021 Zhewei Yao, Xiaoxia Wu, Linjian Ma, Sheng Shen, Kurt Keutzer, Michael W. Mahoney, Yuxiong He

Moreover, in order to reduce hyperparameter tuning, a novel adaptive regularization coefficient is deployed to control the regularization penalty adaptively.

QQP

A Survey of Quantization Methods for Efficient Neural Network Inference

no code implementations 25 Mar 2021 Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer

Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks.

Efficient Neural Network Quantization

Hessian Eigenspectra of More Realistic Nonlinear Models

no code implementations NeurIPS 2021 Zhenyu Liao, Michael W. Mahoney

Given an optimization problem, the Hessian matrix and its eigenspectrum can be used in many ways, ranging from designing more efficient second-order algorithms to performing model analysis and regression diagnostics.

A Differential Geometry Perspective on Orthogonal Recurrent Models

no code implementations 18 Feb 2021 Omri Azencot, N. Benjamin Erichson, Mirela Ben-Chen, Michael W. Mahoney

In this work, we employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.

Noisy Recurrent Neural Networks

1 code implementation NeurIPS 2021 Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states.

General Classification

I-BERT: Integer-only BERT Quantization

3 code implementations 5 Jan 2021 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer

Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks.

Natural Language Inference Natural Language Understanding +1

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystrom method

no code implementations NeurIPS 2020 Michal Derezinski, Rajiv Khanna, Michael W. Mahoney

The Column Subset Selection Problem (CSSP) and the Nystrom method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing.

Sparse sketches with small inversion bias

no code implementations 21 Nov 2020 Michał Dereziński, Zhenyu Liao, Edgar Dobriban, Michael W. Mahoney

For a tall $n\times d$ matrix $A$ and a random $m\times n$ sketching matrix $S$, the sketched estimate of the inverse covariance matrix $(A^\top A)^{-1}$ is typically biased: $E[(\tilde A^\top\tilde A)^{-1}]\ne(A^\top A)^{-1}$, where $\tilde A=SA$.
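
The bias is easy to see numerically; the snippet below does so with a dense Gaussian sketch (the paper studies sparse sketches, so this is only a generic illustration of the phenomenon).

```python
# Numerical illustration of inversion bias with a dense Gaussian sketch; the
# paper's focus is sparse sketches, so this only demonstrates the phenomenon.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, trials = 500, 5, 50, 2000
A = rng.standard_normal((n, d))
true_inv = np.linalg.inv(A.T @ A)

avg = np.zeros((d, d))
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)        # sketching matrix
    As = S @ A
    avg += np.linalg.inv(As.T @ As) / trials

# For a Gaussian sketch the average is inflated by roughly m / (m - d - 1).
print(np.diag(avg) / np.diag(true_inv))
```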

Distributed Optimization

HAWQV3: Dyadic Neural Network Quantization

1 code implementation 20 Nov 2020 Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael W. Mahoney, Kurt Keutzer

Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.

Model Compression Quantization

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks

1 code implementation NeurIPS 2020 Jianfei Chen, Yu Gai, Zhewei Yao, Michael W. Mahoney, Joseph E. Gonzalez

We show that the FQT gradient is an unbiased estimator of the QAT gradient, and we discuss the impact of gradient quantization on its variance.
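
The standard device for making a quantized value an unbiased estimator of its full-precision counterpart is stochastic rounding; the sketch below shows that device in isolation (the bit-width, scaling, and clipping are illustrative, not the paper's full FQT scheme).

```python
# Stochastic rounding in isolation: the quantized output equals the input in
# expectation (up to clipping). Bit-width and scaling are illustrative only.
import torch

def stochastic_round_quantize(x, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax + 1e-12
    y = x / scale
    lower = torch.floor(y)
    q = lower + (torch.rand_like(y) < (y - lower)).float()   # round up with prob = frac(y)
    return torch.clamp(q, -qmax - 1, qmax) * scale
```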

Quantization

Sparse Quantized Spectral Clustering

no code implementations ICLR 2021 Zhenyu Liao, Romain Couillet, Michael W. Mahoney

Given a large data matrix, sparsifying, quantizing, and/or performing other entry-wise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models.

Quantization

Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models

1 code implementation 26 Aug 2020 Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E. Gonzalez, Michael W. Mahoney

Replacing BN with the recently-proposed Group Normalization (GN) can reduce gradient diversity and improve test accuracy.
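
In PyTorch terms, the change described above is a mechanical layer swap; a minimal sketch is below (the group count is a common but assumed choice, and channel counts must be divisible by it).

```python
# Mechanical sketch of the swap described above: replace every BatchNorm2d with
# GroupNorm. num_groups=32 is a common but assumed choice.
import torch.nn as nn

def replace_bn_with_gn(module, num_groups=32):
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)
    return module
```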

Federated Learning

Continuous-in-Depth Neural Networks

4 code implementations 5 Aug 2020 Alejandro F. Queiruga, N. Benjamin Erichson, Dane Taylor, Michael W. Mahoney

We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.

Numerical Integration

Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware

no code implementations 31 Jul 2020 N. Benjamin Erichson, Dane Taylor, Qixuan Wu, Michael W. Mahoney

The ubiquity of deep neural networks (DNNs), cloud-based training, and transfer learning is giving rise to a new cybersecurity frontier in which unsecure DNNs have "structural malware" (i.e., compromised weights and activation pathways).

Transfer Learning

Boundary thickness and robustness in learning models

1 code implementation NeurIPS 2020 Yaoqing Yang, Rajiv Khanna, Yaodong Yu, Amir Gholami, Kurt Keutzer, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney

Using these observations, we show that noise-augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms.

Adversarial Defense Data Augmentation

Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization

no code implementations NeurIPS 2020 Michał Dereziński, Burak Bartan, Mert Pilanci, Michael W. Mahoney

In distributed second order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data.

Point Processes Second-order methods

Good Classifiers are Abundant in the Interpolating Regime

no code implementations 22 Jun 2020 Ryan Theisen, Jason M. Klusowski, Michael W. Mahoney

Inspired by the statistical mechanics approach to learning, we formally define and develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers from several model classes.

Learning Theory

Lipschitz Recurrent Neural Networks

1 code implementation ICLR 2021 N. Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, Michael W. Mahoney

Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity.
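
A stripped-down sketch of the hidden-state dynamics described above, a linear component plus a Lipschitz (tanh) nonlinearity, discretized with one explicit Euler step; the parameterization of the matrices is simplified relative to the paper.

```python
# Stripped-down recurrent unit: h' = A h + tanh(W h + U x), integrated with one
# explicit Euler step per input. The parameterization of A and W is simplified.
import torch
import torch.nn as nn

class SimpleLipschitzRNNCell(nn.Module):
    def __init__(self, input_dim, hidden_dim, dt=0.1):
        super().__init__()
        self.A = nn.Parameter(0.1 * torch.randn(hidden_dim, hidden_dim))  # linear part
        self.W = nn.Parameter(0.1 * torch.randn(hidden_dim, hidden_dim))  # nonlinearity weights
        self.U = nn.Linear(input_dim, hidden_dim)
        self.dt = dt

    def forward(self, x, h):
        return h + self.dt * (h @ self.A.T + torch.tanh(h @ self.W.T + self.U(x)))
```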

Language Modelling Sequential Image Classification

Multiplicative noise and heavy tails in stochastic optimization

no code implementations 11 Jun 2020 Liam Hodgkinson, Michael W. Mahoney

Although stochastic optimization is central to modern machine learning, the precise mechanisms underlying its success, and in particular, the precise role of the stochasticity, still remain unclear.

Stochastic Optimization

A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent

no code implementations NeurIPS 2020 Zhenyu Liao, Romain Couillet, Michael W. Mahoney

This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the dimension of feature space $N$ are all large and comparable.

regression

ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning

3 code implementations 1 Jun 2020 Zhewei Yao, Amir Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, Michael W. Mahoney

We introduce ADAHESSIAN, a second order stochastic optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates of the HESSIAN.
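
The curvature information in such methods typically comes from a Hutchinson-style randomized estimate of the Hessian diagonal built out of Hessian-vector products; the sketch below shows that estimator alone, not the full optimizer.

```python
# Hutchinson-style estimate of the Hessian diagonal via Hessian-vector products;
# this is the curvature ingredient only, not the full ADAHESSIAN optimizer.
import torch

def hessian_diag_estimate(loss, params, n_samples=1):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    diag = [torch.zeros_like(p) for p in params]
    for _ in range(n_samples):
        zs = [torch.randint_like(p, 0, 2) * 2.0 - 1.0 for p in params]   # Rademacher probes
        hvps = torch.autograd.grad(grads, params, grad_outputs=zs, retain_graph=True)
        for d, z, hv in zip(diag, zs, hvps):
            d += z * hv / n_samples        # E[z * (Hz)] equals the Hessian diagonal
    return diag
```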

BIG-bench Machine Learning Second-order methods +1

Determinantal Point Processes in Randomized Numerical Linear Algebra

no code implementations 7 May 2020 Michał Dereziński, Michael W. Mahoney

For example, random sampling with a DPP leads to new kinds of unbiased estimators for least squares, enabling more refined statistical and inferential understanding of these algorithms; a DPP is, in some sense, an optimal randomized algorithm for the Nyström method; and a RandNLA technique called leverage score sampling can be derived as the marginal distribution of a DPP.
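
For concreteness, the sketch below implements the i.i.d. leverage-score sampling mentioned at the end of the abstract for an overdetermined least-squares problem; it is not a DPP sampler.

```python
# Leverage-score sampling for overdetermined least squares (the i.i.d. variant
# mentioned above as a DPP marginal); this is not a DPP sampler itself.
import numpy as np

def leverage_score_sampled_ls(A, b, s, rng=np.random.default_rng()):
    Q, _ = np.linalg.qr(A)                         # leverage scores = squared row norms of Q
    p = np.sum(Q ** 2, axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=s, replace=True, p=p)
    w = 1.0 / np.sqrt(s * p[idx])                  # importance weights keep the sketched objective unbiased
    return np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)[0]
```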

Point Processes

PowerNorm: Rethinking Batch Normalization in Transformers

1 code implementation ICML 2020 Sheng Shen, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer

To address this, we propose Power Normalization (PN), a novel normalization scheme that resolves this issue by (i) relaxing zero-mean normalization in BN, (ii) incorporating a running quadratic mean instead of per batch statistics to stabilize fluctuations, and (iii) using an approximate backpropagation for incorporating the running statistics in the forward pass.
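
A heavily simplified sketch of the scheme outlined above, no mean subtraction and scaling by a running quadratic mean; the paper's approximate backward pass through the running statistics is omitted, so this is illustrative only.

```python
# Heavily simplified sketch: no zero-mean step, scale by a (running) quadratic
# mean. The paper's approximate backward pass for running statistics is omitted.
import torch
import torch.nn as nn

class SimplePowerNorm(nn.Module):
    def __init__(self, dim, momentum=0.1, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        self.register_buffer("running_quad_mean", torch.ones(dim))
        self.momentum, self.eps = momentum, eps

    def forward(self, x):                          # x: (..., dim)
        if self.training:
            quad_mean = (x ** 2).mean(dim=tuple(range(x.dim() - 1)))
            with torch.no_grad():
                self.running_quad_mean.mul_(1 - self.momentum).add_(self.momentum * quad_mean)
        else:
            quad_mean = self.running_quad_mean
        return self.weight * x / torch.sqrt(quad_mean + self.eps) + self.bias
```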

Machine Translation

Error Estimation for Sketched SVD via the Bootstrap

no code implementations 10 Mar 2020 Miles E. Lopes, N. Benjamin Erichson, Michael W. Mahoney

In order to compute fast approximations to the singular value decomposition (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.

Forecasting Sequential Data using Consistent Koopman Autoencoders

1 code implementation ICML 2020 Omri Azencot, N. Benjamin Erichson, Vanessa Lin, Michael W. Mahoney

Recurrent neural networks are widely used on time series data, yet such models often ignore the underlying physical structures in such sequences.

Time Series Analysis

Asymptotic Analysis of Sampling Estimators for Randomized Numerical Linear Algebra Algorithms

no code implementations 24 Feb 2020 Ping Ma, Xinlian Zhang, Xin Xing, Jingyi Ma, Michael W. Mahoney

In this article, we develop an asymptotic analysis to derive the distribution of RandNLA sampling estimators for the least-squares problem.

Two-sample testing

Stochastic Normalizing Flows

no code implementations NeurIPS 2020 Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney

We introduce stochastic normalizing flows, an extension of continuous normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).

Variational Inference

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method

no code implementations 21 Feb 2020 Michał Dereziński, Rajiv Khanna, Michael W. Mahoney

The Column Subset Selection Problem (CSSP) and the Nyström method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing.

Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data

1 code implementation 17 Feb 2020 Charles H. Martin, Tongsu Peng, Michael W. Mahoney

We find that norm based metrics correlate well with reported test accuracies for well-trained models, but that they often cannot distinguish well-trained versus poorly-trained models.

ZeroQ: A Novel Zero Shot Quantization Framework

3 code implementations CVPR 2020 Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W. Mahoney, Kurt Keutzer

Importantly, ZeroQ has a very low computational overhead, and it can finish the entire quantization process in less than 30s (0.5% of one epoch training time of ResNet50 on ImageNet).

 Ranked #1 on Data Free Quantization on CIFAR10 (CIFAR-10 W8A8 Top-1 Accuracy metric)

Data Free Quantization Neural Network Compression

ANODEV2: A Coupled Neural ODE Framework

1 code implementation NeurIPS 2019 Tianjun Zhang, Zhewei Yao, Amir Gholami, Joseph E. Gonzalez, Kurt Keutzer, Michael W. Mahoney, George Biros

It has been observed that residual networks can be viewed as the explicit Euler discretization of an Ordinary Differential Equation (ODE).

LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data

no code implementations 27 Nov 2019 Ali Eshragh, Fred Roosta, Asef Nazari, Michael W. Mahoney

We first develop a new fast algorithm to estimate the leverage scores of an autoregressive (AR) model in big data regimes.

Time Series Analysis

Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings

no code implementations 29 Sep 2019 Keith Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe

In both cases, we prove that when the underlying graph is generated according to a latent space model called the random dot product graph, which includes the popular stochastic block model as a special case, an out-of-sample extension based on a least-squares objective obeys a central limit theorem about the true latent position of the out-of-sample vertex.

Dimensionality Reduction Graph Embedding +1

Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT

no code implementations 12 Sep 2019 Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer

In particular, we propose a new group-wise quantization scheme, and we use a Hessian based mix-precision method to compress the model further.

Quantization SST-2

Geometric Rates of Convergence for Kernel-based Sampling Algorithms

no code implementations 19 Jul 2019 Rajiv Khanna, Liam Hodgkinson, Michael W. Mahoney

The rate of convergence of weighted kernel herding (WKH) and sequential Bayesian quadrature (SBQ), two kernel-based sampling algorithms for estimating integrals with respect to some target probability measure, is investigated.

Statistical guarantees for local graph clustering

no code implementations 11 Jun 2019 Wooseok Ha, Kimon Fountoulakis, Michael W. Mahoney

In this paper, we adopt a statistical perspective on local graph clustering, and we analyze the performance of the $\ell_1$-regularized PageRank method (Fountoulakis et al.).

Graph Clustering

Bayesian experimental design using regularized determinantal point processes

1 code implementation 10 Jun 2019 Michał Dereziński, Feynman Liang, Michael W. Mahoney

In experimental design, we are given $n$ vectors in $d$ dimensions, and our goal is to select $k\ll n$ of them to perform expensive measurements, e.g., to obtain labels/responses, for a linear regression task.

Experimental Design Point Processes

Residual Networks as Nonlinear Systems: Stability Analysis using Linearization

no code implementations 31 May 2019 Kai Rothauge, Zhewei Yao, Zixi Hu, Michael W. Mahoney

We regard pre-trained residual networks (ResNets) as nonlinear systems and use linearization, a common method used in the qualitative analysis of nonlinear systems, to understand the behavior of the networks under small perturbations of the input images.

Distributed estimation of the inverse Hessian by determinantal averaging

no code implementations NeurIPS 2019 Michał Dereziński, Michael W. Mahoney

In distributed optimization and distributed numerical linear algebra, we often encounter an inversion bias: if we want to compute a quantity that depends on the inverse of a sum of distributed matrices, then the sum of the inverses does not equal the inverse of the sum.

Distributed Optimization

Physics-informed Autoencoders for Lyapunov-stable Fluid Flow Prediction

no code implementations 26 May 2019 N. Benjamin Erichson, Michael Muehlebach, Michael W. Mahoney

In addition to providing high-profile successes in computer vision and natural language processing, neural networks also provide an emerging set of techniques for scientific problems.

Traditional and Heavy Tailed Self Regularization in Neural Network Models

no code implementations ICLR 2019 Charles H. Martin, Michael W. Mahoney

Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet.

JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks

1 code implementation 7 Apr 2019 N. Benjamin Erichson, Zhewei Yao, Michael W. Mahoney

To complement these approaches, we propose a very simple and inexpensive strategy which can be used to "retrofit" a previously-trained network to improve its resilience to adversarial attacks.

OverSketched Newton: Fast Convex Optimization for Serverless Systems

1 code implementation 21 Mar 2019 Vipul Gupta, Swanand Kadhe, Thomas Courtade, Michael W. Mahoney, Kannan Ramchandran

Motivated by recent developments in serverless systems for large-scale computation as well as improvements in scalable randomized matrix algorithms, we develop OverSketched Newton, a randomized Hessian-based optimization algorithm to solve large-scale convex optimization problems in serverless systems.

Distributed Optimization

Inefficiency of K-FAC for Large Batch Size Training

no code implementations 14 Mar 2019 Linjian Ma, Gabe Montague, Jiayu Ye, Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney

In stochastic optimization, using large batch sizes during training can leverage parallel resources to produce faster wall-clock training times per training epoch.

Stochastic Optimization

Shallow Neural Networks for Fluid Flow Reconstruction with Limited Sensors

1 code implementation 20 Feb 2019 N. Benjamin Erichson, Lionel Mathelin, Zhewei Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz

In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and limited data.

Traditional and Heavy-Tailed Self Regularization in Neural Network Models

2 code implementations 24 Jan 2019 Charles H. Martin, Michael W. Mahoney

Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet.

Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks

no code implementations 24 Jan 2019 Charles H. Martin, Michael W. Mahoney

In this paper, we show how to use a new Theory of Heavy-Tailed Self-Regularization (HT-SR) to answer this.

On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent

no code implementations 30 Nov 2018 Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W. Mahoney, Joseph Gonzalez

Increasing the mini-batch size for stochastic gradient descent offers significant opportunities to reduce wall-clock training time, but there are a variety of theoretical and systems challenges that impede the widespread success of this technique.

Image Classification Image Segmentation +2

A Short Introduction to Local Graph Clustering Methods and Software

1 code implementation 17 Oct 2018 Kimon Fountoulakis, David F. Gleich, Michael W. Mahoney

Scalability problems led to the development of local graph clustering algorithms that come with a variety of theoretical guarantees.

Social and Information Networks

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning

3 code implementations 2 Oct 2018 Charles H. Martin, Michael W. Mahoney

Random Matrix Theory (RMT) is applied to analyze weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet.

Newton-MR: Inexact Newton Method With Minimum Residual Sub-problem Solver

no code implementations 30 Sep 2018 Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney

We consider a variant of the inexact Newton method, called Newton-MR, in which the least-squares sub-problems are solved approximately using the Minimum Residual method.

Newton-ADMM: A Distributed GPU-Accelerated Optimizer for Multiclass Classification Problems

1 code implementation 18 Jul 2018 Chih-Hao Fang, Sudhir B. Kylasa, Fred Roosta, Michael W. Mahoney, Ananth Grama

First-order optimization methods, such as stochastic gradient descent (SGD) and its variants, are widely used in machine learning applications due to their simplicity and low per-iteration costs.

General Classification

Error Estimation for Randomized Least-Squares Algorithms via the Bootstrap

no code implementations ICML 2018 Miles E. Lopes, Shusen Wang, Michael W. Mahoney

As a more practical alternative, we propose a bootstrap method to compute a posteriori error estimates for randomized LS algorithms.

GPU Accelerated Sub-Sampled Newton's Method

no code implementations 26 Feb 2018 Sudhir B. Kylasa, Farbod Roosta-Khorasani, Michael W. Mahoney, Ananth Grama

In particular, in convex settings, we consider variants of classical Newton's method in which the Hessian and/or the gradient are randomly sub-sampled.
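
A small sketch of such a sub-sampled Newton step for a convex finite-sum problem (regularized logistic regression here): full-data gradient, Hessian from a random subset of rows. This illustrates the idea only, not the paper's GPU implementation.

```python
# Sub-sampled Newton step for regularized logistic regression: full-data
# gradient, Hessian estimated from a uniform row subsample.
import numpy as np

def subsampled_newton_step(X, y, w, hess_rows=256, lam=1e-3, rng=np.random.default_rng()):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y) + lam * w
    idx = rng.choice(len(y), size=min(hess_rows, len(y)), replace=False)
    Xs, ps = X[idx], p[idx]
    H = Xs.T @ (Xs * (ps * (1 - ps))[:, None]) / len(idx) + lam * np.eye(len(w))
    return w - np.linalg.solve(H, grad)
```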

Second-order methods

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries

6 code implementations NeurIPS 2018 Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney

Extensive experiments on multiple networks show that saddle points are not the cause of the generalization gap in large batch size training, and the results consistently show that large batch training converges to points with a noticeably higher Hessian spectrum.

Out-of-sample extension of graph adjacency spectral embedding

no code implementations ICML 2018 Keith Levin, Farbod Roosta-Khorasani, Michael W. Mahoney, Carey E. Priebe

Many popular dimensionality reduction procedures have out-of-sample extensions, which allow a practitioner to apply a learned embedding to observations not seen in the initial training sample.

Dimensionality Reduction

Lectures on Randomized Numerical Linear Algebra

1 code implementation 24 Dec 2017 Petros Drineas, Michael W. Mahoney

This chapter is based on lectures on Randomized Numerical Linear Algebra from the 2016 Park City Mathematics Institute summer school on The Mathematics of Data.

Avoiding Synchronization in First-Order Methods for Sparse Convex Optimization

no code implementations 17 Dec 2017 Aditya Devarakonda, Kimon Fountoulakis, James Demmel, Michael W. Mahoney

Parallel computing has played an important role in speeding up convex optimization methods for big data analytics and large-scale machine learning (ML).

A Berkeley View of Systems Challenges for AI

no code implementations 15 Dec 2017 Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W. Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M. Hellerstein, Joseph E. Gonzalez, Ken Goldberg, Ali Ghodsi, David Culler, Pieter Abbeel

With the increasing commoditization of computer vision, speech recognition and machine translation systems and the widespread deployment of learning-based back-end technologies such as digital advertising and intelligent infrastructures, AI (Artificial Intelligence) has moved from research labs to production.

Machine Translation speech-recognition +1

Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior

no code implementations ICLR 2018 Charles H. Martin, Michael W. Mahoney

Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.

LASAGNE: Locality And Structure Aware Graph Node Embedding

no code implementations 17 Oct 2017 Evgeniy Faerman, Felix Borutta, Kimon Fountoulakis, Michael W. Mahoney

For larger graphs with flat NCPs that are strongly expander-like, existing methods lead to random walks that expand rapidly, touching many dissimilar nodes, thereby leading to lower-quality vector representations that are less useful for downstream tasks.

Link Prediction Multi-Label Classification

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization

no code implementations NeurIPS 2018 Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney

For distributed computing environments, we consider the empirical risk minimization problem and propose a distributed and communication-efficient Newton-type optimization method.

Distributed Computing Distributed Optimization

Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study

no code implementations 25 Aug 2017 Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney

While first-order optimization methods such as stochastic gradient descent (SGD) are popular in machine learning (ML), they come with well-known deficiencies, including relatively-slow convergence, sensitivity to the settings of hyper-parameters such as learning rate, stagnation at high training errors, and difficulty in escaping flat regions and saddle points.

BIG-bench Machine Learning Second-order methods

Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information

no code implementations 23 Aug 2017 Peng Xu, Fred Roosta, Michael W. Mahoney

In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.

Vocal Bursts Type Prediction

A Bootstrap Method for Error Estimation in Randomized Matrix Multiplication

no code implementations 6 Aug 2017 Miles E. Lopes, Shusen Wang, Michael W. Mahoney

In recent years, randomized methods for numerical linear algebra have received growing interest as a general approach to large-scale problems.

Dimensionality Reduction

Skip-Gram − Zipf + Uniform = Vector Additivity

no code implementations ACL 2017 Alex Gittens, Dimitris Achlioptas, Michael W. Mahoney

An unexpected "side-effect" of such models is that their vectors often exhibit compositionality, i.e., adding two word-vectors results in a vector that is only a small angle away from the vector of a word representing the semantic composite of the original words, e.g., "man" + "royal" = "king".

Dimensionality Reduction Word Embeddings

Capacity Releasing Diffusion for Speed and Locality

no code implementations 19 Jun 2017 Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao

Thus, our CRD Process is the first local graph clustering algorithm that is not subject to the well-known quadratic Cheeger barrier.

Graph Clustering

Scalable Kernel K-Means Clustering with Nystrom Approximation: Relative-Error Bounds

no code implementations 9 Jun 2017 Shusen Wang, Alex Gittens, Michael W. Mahoney

This work analyzes the application of this paradigm to kernel $k$-means clustering, and shows that applying the linear $k$-means clustering algorithm to $\frac{k}{\epsilon} (1 + o(1))$ features constructed using a so-called rank-restricted Nyström approximation results in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in terms of the kernel $k$-means cost function, relative to the guarantee provided by the same algorithm without the use of the Nyström method.
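
The paradigm being analyzed can be sketched in a few lines: build Nyström features from sampled landmark points and run linear k-means on them. Uniform landmark sampling and the RBF kernel below are simplifying choices rather than the rank-restricted construction studied in the paper.

```python
# Sketch of the paradigm analyzed above: Nystrom features from sampled landmarks,
# then linear k-means on those features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kmeans(X, n_clusters, n_landmarks=100, gamma=0.1, rng=np.random.default_rng()):
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    C = rbf_kernel(X, X[idx], gamma=gamma)          # n x m cross-kernel block
    W = rbf_kernel(X[idx], X[idx], gamma=gamma)     # m x m landmark kernel block
    evals, evecs = np.linalg.eigh(W)
    feats = C @ evecs / np.sqrt(np.clip(evals, 1e-12, None))   # Nystrom feature map
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```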

Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxy Data

no code implementations 13 Sep 2016 David Lawlor, Tamás Budavári, Michael W. Mahoney

This technique permits us to characterize empirically the natural variations in observed spectra data, and we illustrate how this approach can be used in an exploratory manner to highlight both large-scale global as well as small-scale local structure in Sloan Digital Sky Survey (SDSS) data.

Lecture Notes on Spectral Graph Methods

no code implementations 17 Aug 2016 Michael W. Mahoney

These are lecture notes that are based on the lectures from a class I taught on the topic of Spectral Graph Methods at UC Berkeley during the Spring 2015 semester.

Lecture Notes on Randomized Linear Algebra

4 code implementations 16 Aug 2016 Michael W. Mahoney

These are lecture notes that are based on the lectures from a class I taught on the topic of Randomized Linear Algebra (RLA) at UC Berkeley during the Fall 2013 semester.

Matrix Factorization at Scale: a Comparison of Scientific Data Analytics in Spark and C+MPI Using Three Case Studies

1 code implementation 5 Jul 2016 Alex Gittens, Aditya Devarakonda, Evan Racah, Michael Ringenburg, Lisa Gerhardt, Jey Kottalam, Jialin Liu, Kristyn Maschhoff, Shane Canon, Jatin Chhugani, Pramod Sharma, Jiyan Yang, James Demmel, Jim Harrell, Venkat Krishnamurthy, Michael W. Mahoney, Prabhat

We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms.

Distributed, Parallel, and Cluster Computing G.1.3; C.2.4

Sub-sampled Newton Methods with Non-uniform Sampling

no code implementations NeurIPS 2016 Peng Xu, Jiyan Yang, Farbod Roosta-Khorasani, Christopher Ré, Michael W. Mahoney

As second-order methods prove to be effective in finding the minimizer to high precision, in this work, we propose randomized Newton-type algorithms that exploit non-uniform sub-sampling of $\{\nabla^2 f_i(w)\}_{i=1}^{n}$, as well as inexact updates, as means to reduce the computational complexity.

Second-order methods

FLAG n' FLARE: Fast Linearly-Coupled Adaptive Gradient Methods

no code implementations 26 May 2016 Xiang Cheng, Farbod Roosta-Khorasani, Stefan Palombo, Peter L. Bartlett, Michael W. Mahoney

We consider first order gradient methods for effectively optimizing a composite objective in the form of a sum of smooth and, potentially, non-smooth functions.

Sub-Sampled Newton Methods I: Globally Convergent Algorithms

no code implementations 18 Jan 2016 Farbod Roosta-Khorasani, Michael W. Mahoney

As a remedy, for all of our algorithms, we also give global convergence results for the case of inexact updates where such linear system is solved only approximately.

Sub-Sampled Newton Methods II: Local Convergence Rates

no code implementations 18 Jan 2016 Farbod Roosta-Khorasani, Michael W. Mahoney

In such problems, sub-sampling as a way to reduce $n$ can yield substantial computational savings.

Second-order methods

Fast Randomized Kernel Ridge Regression with Statistical Guarantees

no code implementations NeurIPS 2015 Ahmed Alaoui, Michael W. Mahoney

One approach to improving the running time of kernel-based methods is to build a small sketch of the kernel matrix and use it in lieu of the full matrix in the machine learning task of interest.

regression

A Local Perspective on Community Structure in Multilayer Networks

no code implementations 18 Oct 2015 Lucas G. S. Jeub, Michael W. Mahoney, Peter J. Mucha, Mason A. Porter

The analysis of multilayer networks is among the most active areas of network science, and there are now several methods to detect dense "communities" of nodes in multilayer networks.

Social and Information Networks Probability Adaptation and Self-Organizing Systems Data Analysis, Statistics and Probability Physics and Society

Optimal Subsampling Approaches for Large Sample Linear Regression

no code implementations 17 Sep 2015 Rong Zhu, Ping Ma, Michael W. Mahoney, Bin Yu

For the unweighted estimation algorithm, we show that the resulting subsample estimator is not consistent with the full-sample OLS estimator.

regression

Block Basis Factorization for Scalable Kernel Matrix Evaluation

no code implementations 3 May 2015 Ruoxi Wang, Yingzhou Li, Michael W. Mahoney, Eric Darve

Kernel methods are widespread in machine learning; however, they are limited by the quadratic complexity of the construction, application, and storage of kernel matrices.

BIG-bench Machine Learning

Weighted SGD for $\ell_p$ Regression with Randomized Preconditioning

no code implementations 12 Feb 2015 Jiyan Yang, Yin-Lam Chow, Christopher Ré, Michael W. Mahoney

We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., $\ell_2$ and $\ell_1$ regression problems.

regression

Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments

no code implementations 10 Feb 2015 Jiyan Yang, Xiangrui Meng, Michael W. Mahoney

and demonstrate that $\ell_1$ and $\ell_2$ regression problems can be solved to low, medium, or high precision in existing distributed systems on up to terabyte-sized data.

regression

Fast Randomized Kernel Methods With Statistical Guarantees

no code implementations 2 Nov 2014 Ahmed El Alaoui, Michael W. Mahoney

By extending the notion of statistical leverage scores to the setting of kernel ridge regression, our main statistical result is to identify an importance sampling distribution that reduces the size of the sketch (i.e., the required number of columns to be sampled) to the effective dimensionality of the problem.
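
For small problems the two quantities named above are easy to compute directly; the sketch below does so with dense linear algebra (this is the naive computation, not the fast sketching procedure the paper is about), using an RBF kernel as an assumed example.

```python
# Naive, dense computation of ridge leverage scores and the effective
# dimensionality they sum to; not a fast sketching implementation.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def ridge_leverage_scores(X, reg, gamma=0.1):
    K = rbf_kernel(X, gamma=gamma)
    n = len(X)
    scores = np.diag(K @ np.linalg.inv(K + n * reg * np.eye(n)))
    return scores, scores.sum()          # the sum is the effective dimensionality

X = np.random.randn(200, 5)
scores, d_eff = ridge_leverage_scores(X, reg=1e-2)
print("effective dimensionality:", d_eff)
```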

Random Laplace Feature Maps for Semigroup Kernels on Histograms

no code implementations CVPR 2014 Jiyan Yang, Vikas Sindhwani, Quanfu Fan, Haim Avron, Michael W. Mahoney

With the goal of accelerating the training and testing complexity of nonlinear kernel methods, several recent papers have proposed explicit embeddings of the input data into low-dimensional feature spaces, where fast linear methods can instead be used to generate approximate solutions.

Event Detection Image Classification

Think Locally, Act Locally: The Detection of Small, Medium-Sized, and Large Communities in Large Networks

1 code implementation 15 Mar 2014 Lucas G. S. Jeub, Prakash Balachandran, Mason A. Porter, Peter J. Mucha, Michael W. Mahoney

In this paper, we adopt a complementary perspective that "communities" are associated with bottlenecks of locally-biased dynamical processes that begin at seed sets of nodes, and we employ several different community-identification procedures (using diffusion-based and geodesic-based dynamics) to investigate community quality as a function of community size.

Social and Information Networks Disordered Systems and Neural Networks Combinatorics Adaptation and Self-Organizing Systems Physics and Society

A Statistical Perspective on Algorithmic Leveraging

no code implementations 23 Jun 2013 Ping Ma, Michael W. Mahoney, Bin Yu

A detailed empirical evaluation of existing leverage-based methods as well as these two new methods is carried out on both synthetic and real data sets.

Quantile Regression for Large-scale Applications

no code implementations 1 May 2013 Jiyan Yang, Xiangrui Meng, Michael W. Mahoney

Our empirical evaluation illustrates that our algorithm is competitive with the best previous work on small to medium-sized problems, and that in addition it can be implemented in MapReduce-like environments and applied to terabyte-sized problems.

regression

Semi-supervised Eigenvectors for Large-scale Locally-biased Learning

no code implementations 28 Apr 2013 Toke J. Hansen, Michael W. Mahoney

For example, one might be interested in the clustering structure of a data graph near a prespecified "seed set" of nodes, or one might be interested in finding partitions in an image that are near a prespecified "ground truth" set of pixels.

BIG-bench Machine Learning

Revisiting the Nystrom Method for Improved Large-Scale Machine Learning

no code implementations 7 Mar 2013 Alex Gittens, Michael W. Mahoney

Our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of SPSD matrices.

BIG-bench Machine Learning

Semi-supervised Eigenvectors for Locally-biased Learning

no code implementations NeurIPS 2012 Toke Hansen, Michael W. Mahoney

In many applications, one has information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks nearby that pre-specified target region.

BIG-bench Machine Learning

The Fast Cauchy Transform and Faster Robust Linear Regression

no code implementations 19 Jul 2012 Kenneth L. Clarkson, Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, Xiangrui Meng, David P. Woodruff

We provide fast algorithms for overconstrained $\ell_p$ regression and related problems: for an $n\times d$ input matrix $A$ and vector $b\in\mathbb{R}^n$, in $O(nd\log n)$ time we reduce the problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_p$ to the same problem with input matrix $\tilde A$ of dimension $s \times d$ and corresponding $\tilde b$ of dimension $s\times 1$.

regression

Regularized Laplacian Estimation and Fast Eigenvector Approximation

no code implementations NeurIPS 2011 Patrick O. Perry, Michael W. Mahoney

Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian.

regression

Randomized Dimensionality Reduction for k-means Clustering

no code implementations 13 Oct 2011 Christos Boutsidis, Anastasios Zouzias, Michael W. Mahoney, Petros Drineas

On the other hand, two provably accurate feature extraction methods for $k$-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD).

Dimensionality Reduction feature selection

Randomized algorithms for matrices and data

no code implementations 29 Apr 2011 Michael W. Mahoney

This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis.

Data Structures and Algorithms

CUR from a Sparse Optimization Viewpoint

no code implementations NeurIPS 2010 Jacob Bien, Ya Xu, Michael W. Mahoney

The CUR decomposition provides an approximation of a matrix X that has low reconstruction error and that is sparse in the sense that the resulting approximation lies in the span of only a few columns of X.

Effective Resistances, Statistical Leverage, and Applications to Linear Equation Solving

1 code implementation 18 May 2010 Petros Drineas, Michael W. Mahoney

Our first and main result is a simple algorithm to approximate the solution to a set of linear equations defined by a Laplacian (for a graph $G$ with $n$ nodes and $m \le n^2$ edges) constraint matrix.
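
To fix ideas, the snippet below sets up the kind of system in question, a graph Laplacian constraint matrix, and solves it with conjugate gradients; CG is used only as a generic illustration and is not the approximation algorithm of the paper.

```python
# Set up a graph Laplacian system and solve it with conjugate gradients. CG is a
# generic illustration here, not the approximation algorithm of the paper.
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
A = sparse_random(300, 300, density=0.05, random_state=0)
A = ((A + A.T) > 0).astype(float)                  # symmetric 0/1 adjacency matrix
L = laplacian(A, normed=False)

b = rng.standard_normal(300)
b -= b.mean()                                      # orthogonal to the all-ones null space
x, info = cg(L + 1e-8 * identity(300), b)          # small shift keeps the system definite
print("CG converged:", info == 0)
```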

Numerical Analysis

Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters

no code implementations 8 Oct 2008 Jure Leskovec, Kevin J. Lang, Anirban Dasgupta, Michael W. Mahoney

A large body of work has been devoted to defining and identifying clusters or communities in social and information networks.

Data Structures and Algorithms Data Analysis, Statistics and Probability Physics and Society
